00:00:00.001 Started by upstream project "autotest-per-patch" build number 126184 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.092 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.092 The recommended git tool is: git 00:00:00.092 using credential 00000000-0000-0000-0000-000000000002 00:00:00.094 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.143 Fetching changes from the remote Git repository 00:00:00.145 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.189 Using shallow fetch with depth 1 00:00:00.189 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.189 > git --version # timeout=10 00:00:00.225 > git --version # 'git version 2.39.2' 00:00:00.225 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.253 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.253 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.592 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.608 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.621 Checking out Revision 1e4055c0ee28da4fa0007a72f92a6499a45bf65d (FETCH_HEAD) 00:00:06.621 > git config core.sparsecheckout # timeout=10 00:00:06.636 > git read-tree -mu HEAD # timeout=10 00:00:06.657 > git checkout -f 1e4055c0ee28da4fa0007a72f92a6499a45bf65d # timeout=5 00:00:06.682 Commit message: "packer: Drop centos7" 00:00:06.682 > git rev-list --no-walk 6701643d8262276508f0d19585dfe8a8273a7300 # timeout=10 00:00:06.781 [Pipeline] Start of Pipeline 00:00:06.797 [Pipeline] library 00:00:06.799 Loading library shm_lib@master 00:00:06.799 Library shm_lib@master is cached. Copying from home. 00:00:06.820 [Pipeline] node 00:00:06.832 Running on GP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.834 [Pipeline] { 00:00:06.842 [Pipeline] catchError 00:00:06.843 [Pipeline] { 00:00:06.853 [Pipeline] wrap 00:00:06.859 [Pipeline] { 00:00:06.865 [Pipeline] stage 00:00:06.866 [Pipeline] { (Prologue) 00:00:07.140 [Pipeline] sh 00:00:07.420 + logger -p user.info -t JENKINS-CI 00:00:07.437 [Pipeline] echo 00:00:07.438 Node: GP8 00:00:07.443 [Pipeline] sh 00:00:07.738 [Pipeline] setCustomBuildProperty 00:00:07.749 [Pipeline] echo 00:00:07.751 Cleanup processes 00:00:07.757 [Pipeline] sh 00:00:08.041 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.041 3544014 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.053 [Pipeline] sh 00:00:08.333 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.333 ++ grep -v 'sudo pgrep' 00:00:08.333 ++ awk '{print $1}' 00:00:08.333 + sudo kill -9 00:00:08.333 + true 00:00:08.349 [Pipeline] cleanWs 00:00:08.360 [WS-CLEANUP] Deleting project workspace... 00:00:08.360 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.367 [WS-CLEANUP] done 00:00:08.371 [Pipeline] setCustomBuildProperty 00:00:08.382 [Pipeline] sh 00:00:08.661 + sudo git config --global --replace-all safe.directory '*' 00:00:08.755 [Pipeline] httpRequest 00:00:08.788 [Pipeline] echo 00:00:08.789 Sorcerer 10.211.164.101 is alive 00:00:08.797 [Pipeline] httpRequest 00:00:08.800 HttpMethod: GET 00:00:08.801 URL: http://10.211.164.101/packages/jbp_1e4055c0ee28da4fa0007a72f92a6499a45bf65d.tar.gz 00:00:08.801 Sending request to url: http://10.211.164.101/packages/jbp_1e4055c0ee28da4fa0007a72f92a6499a45bf65d.tar.gz 00:00:08.817 Response Code: HTTP/1.1 200 OK 00:00:08.818 Success: Status code 200 is in the accepted range: 200,404 00:00:08.818 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_1e4055c0ee28da4fa0007a72f92a6499a45bf65d.tar.gz 00:00:12.995 [Pipeline] sh 00:00:13.284 + tar --no-same-owner -xf jbp_1e4055c0ee28da4fa0007a72f92a6499a45bf65d.tar.gz 00:00:13.301 [Pipeline] httpRequest 00:00:13.328 [Pipeline] echo 00:00:13.330 Sorcerer 10.211.164.101 is alive 00:00:13.339 [Pipeline] httpRequest 00:00:13.345 HttpMethod: GET 00:00:13.345 URL: http://10.211.164.101/packages/spdk_b124a6951b907bca069b5a094c467d44b1aa2056.tar.gz 00:00:13.346 Sending request to url: http://10.211.164.101/packages/spdk_b124a6951b907bca069b5a094c467d44b1aa2056.tar.gz 00:00:13.357 Response Code: HTTP/1.1 200 OK 00:00:13.357 Success: Status code 200 is in the accepted range: 200,404 00:00:13.358 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_b124a6951b907bca069b5a094c467d44b1aa2056.tar.gz 00:02:34.043 [Pipeline] sh 00:02:34.328 + tar --no-same-owner -xf spdk_b124a6951b907bca069b5a094c467d44b1aa2056.tar.gz 00:02:37.631 [Pipeline] sh 00:02:37.920 + git -C spdk log --oneline -n5 00:02:37.920 b124a6951 test/make/check_so_deps: Align the SPDK's build opts with spdk-abi's end 00:02:37.920 9cede6267 test/check_so_deps: Enforce release build (non-debug) when requested 00:02:37.920 6151edad3 test/check_so_deps: Simplify check_header_filenames() 00:02:37.920 44e72e4e7 autopackage: Rename autopackage.sh to release_build.sh 00:02:37.920 255871c19 autopackage: Move core of the script to autobuild 00:02:37.933 [Pipeline] } 00:02:37.952 [Pipeline] // stage 00:02:37.962 [Pipeline] stage 00:02:37.964 [Pipeline] { (Prepare) 00:02:37.985 [Pipeline] writeFile 00:02:38.004 [Pipeline] sh 00:02:38.290 + logger -p user.info -t JENKINS-CI 00:02:38.300 [Pipeline] sh 00:02:38.579 + logger -p user.info -t JENKINS-CI 00:02:38.589 [Pipeline] sh 00:02:38.870 + cat autorun-spdk.conf 00:02:38.870 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:38.870 SPDK_TEST_NVMF=1 00:02:38.870 SPDK_TEST_NVME_CLI=1 00:02:38.870 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:38.870 SPDK_TEST_NVMF_NICS=e810 00:02:38.870 SPDK_TEST_VFIOUSER=1 00:02:38.870 SPDK_RUN_UBSAN=1 00:02:38.870 NET_TYPE=phy 00:02:38.878 RUN_NIGHTLY=0 00:02:38.883 [Pipeline] readFile 00:02:38.914 [Pipeline] withEnv 00:02:38.916 [Pipeline] { 00:02:38.931 [Pipeline] sh 00:02:39.214 + set -ex 00:02:39.214 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:02:39.214 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:39.214 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:39.214 ++ SPDK_TEST_NVMF=1 00:02:39.214 ++ SPDK_TEST_NVME_CLI=1 00:02:39.214 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:39.214 ++ SPDK_TEST_NVMF_NICS=e810 00:02:39.214 ++ SPDK_TEST_VFIOUSER=1 00:02:39.214 ++ SPDK_RUN_UBSAN=1 00:02:39.214 ++ NET_TYPE=phy 00:02:39.214 ++ RUN_NIGHTLY=0 00:02:39.214 + case 
$SPDK_TEST_NVMF_NICS in 00:02:39.214 + DRIVERS=ice 00:02:39.214 + [[ tcp == \r\d\m\a ]] 00:02:39.214 + [[ -n ice ]] 00:02:39.214 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:02:39.214 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:02:39.214 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:02:39.214 rmmod: ERROR: Module irdma is not currently loaded 00:02:39.214 rmmod: ERROR: Module i40iw is not currently loaded 00:02:39.214 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:02:39.214 + true 00:02:39.214 + for D in $DRIVERS 00:02:39.214 + sudo modprobe ice 00:02:39.214 + exit 0 00:02:39.224 [Pipeline] } 00:02:39.245 [Pipeline] // withEnv 00:02:39.251 [Pipeline] } 00:02:39.267 [Pipeline] // stage 00:02:39.275 [Pipeline] catchError 00:02:39.277 [Pipeline] { 00:02:39.290 [Pipeline] timeout 00:02:39.290 Timeout set to expire in 50 min 00:02:39.292 [Pipeline] { 00:02:39.307 [Pipeline] stage 00:02:39.310 [Pipeline] { (Tests) 00:02:39.326 [Pipeline] sh 00:02:39.614 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:39.614 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:39.614 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:39.614 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:02:39.614 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:39.614 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:39.614 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:02:39.614 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:39.614 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:39.614 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:39.614 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:02:39.614 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:39.614 + source /etc/os-release 00:02:39.614 ++ NAME='Fedora Linux' 00:02:39.614 ++ VERSION='38 (Cloud Edition)' 00:02:39.614 ++ ID=fedora 00:02:39.614 ++ VERSION_ID=38 00:02:39.614 ++ VERSION_CODENAME= 00:02:39.614 ++ PLATFORM_ID=platform:f38 00:02:39.614 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:02:39.614 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:39.614 ++ LOGO=fedora-logo-icon 00:02:39.614 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:02:39.614 ++ HOME_URL=https://fedoraproject.org/ 00:02:39.614 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:02:39.614 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:39.614 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:39.614 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:39.614 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:02:39.614 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:39.614 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:02:39.614 ++ SUPPORT_END=2024-05-14 00:02:39.614 ++ VARIANT='Cloud Edition' 00:02:39.614 ++ VARIANT_ID=cloud 00:02:39.614 + uname -a 00:02:39.614 Linux spdk-gp-08 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:02:39.614 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:40.552 Hugepages 00:02:40.552 node hugesize free / total 00:02:40.552 node0 1048576kB 0 / 0 00:02:40.552 node0 2048kB 0 / 0 00:02:40.812 node1 1048576kB 0 / 0 00:02:40.812 node1 2048kB 0 / 0 00:02:40.812 00:02:40.812 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:40.812 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:02:40.812 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 
00:02:40.812 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:02:40.812 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:02:40.812 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:02:40.812 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:02:40.812 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:02:40.812 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:02:40.812 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:02:40.812 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:02:40.812 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:02:40.812 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:02:40.812 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:02:40.812 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:02:40.812 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:02:40.812 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:02:40.812 NVMe 0000:82:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:02:40.812 + rm -f /tmp/spdk-ld-path 00:02:40.812 + source autorun-spdk.conf 00:02:40.812 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:40.812 ++ SPDK_TEST_NVMF=1 00:02:40.812 ++ SPDK_TEST_NVME_CLI=1 00:02:40.812 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:40.812 ++ SPDK_TEST_NVMF_NICS=e810 00:02:40.812 ++ SPDK_TEST_VFIOUSER=1 00:02:40.812 ++ SPDK_RUN_UBSAN=1 00:02:40.812 ++ NET_TYPE=phy 00:02:40.812 ++ RUN_NIGHTLY=0 00:02:40.813 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:40.813 + [[ -n '' ]] 00:02:40.813 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:40.813 + for M in /var/spdk/build-*-manifest.txt 00:02:40.813 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:40.813 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:40.813 + for M in /var/spdk/build-*-manifest.txt 00:02:40.813 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:40.813 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:40.813 ++ uname 00:02:40.813 + [[ Linux == \L\i\n\u\x ]] 00:02:40.813 + sudo dmesg -T 00:02:40.813 + sudo dmesg --clear 00:02:40.813 + dmesg_pid=3545221 00:02:40.813 + [[ Fedora Linux == FreeBSD ]] 00:02:40.813 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:40.813 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:40.813 + sudo dmesg -Tw 00:02:40.813 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:40.813 + [[ -x /usr/src/fio-static/fio ]] 00:02:40.813 + export FIO_BIN=/usr/src/fio-static/fio 00:02:40.813 + FIO_BIN=/usr/src/fio-static/fio 00:02:40.813 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:40.813 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:02:40.813 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:40.813 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:40.813 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:40.813 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:40.813 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:40.813 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:40.813 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:40.813 Test configuration: 00:02:40.813 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:40.813 SPDK_TEST_NVMF=1 00:02:40.813 SPDK_TEST_NVME_CLI=1 00:02:40.813 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:40.813 SPDK_TEST_NVMF_NICS=e810 00:02:40.813 SPDK_TEST_VFIOUSER=1 00:02:40.813 SPDK_RUN_UBSAN=1 00:02:40.813 NET_TYPE=phy 00:02:40.813 RUN_NIGHTLY=0 13:41:35 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:40.813 13:41:35 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:40.813 13:41:35 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:40.813 13:41:35 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:40.813 13:41:35 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:40.813 13:41:35 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:40.813 13:41:35 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:40.813 13:41:35 -- paths/export.sh@5 -- $ export PATH 00:02:40.813 13:41:35 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:40.813 13:41:35 -- common/autobuild_common.sh@472 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:40.813 13:41:35 -- common/autobuild_common.sh@473 -- $ date +%s 00:02:41.072 13:41:35 -- common/autobuild_common.sh@473 -- $ mktemp -dt spdk_1721043695.XXXXXX 00:02:41.072 13:41:35 -- common/autobuild_common.sh@473 -- $ SPDK_WORKSPACE=/tmp/spdk_1721043695.dnubsT 00:02:41.072 13:41:35 -- common/autobuild_common.sh@475 -- $ [[ -n '' ]] 00:02:41.072 13:41:35 -- 
common/autobuild_common.sh@479 -- $ '[' -n '' ']' 00:02:41.072 13:41:35 -- common/autobuild_common.sh@482 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:02:41.072 13:41:35 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:02:41.072 13:41:35 -- common/autobuild_common.sh@488 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:02:41.072 13:41:35 -- common/autobuild_common.sh@489 -- $ get_config_params 00:02:41.072 13:41:35 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:02:41.072 13:41:35 -- common/autotest_common.sh@10 -- $ set +x 00:02:41.072 13:41:35 -- common/autobuild_common.sh@489 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:02:41.072 13:41:35 -- common/autobuild_common.sh@491 -- $ start_monitor_resources 00:02:41.072 13:41:35 -- pm/common@17 -- $ local monitor 00:02:41.072 13:41:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:41.072 13:41:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:41.072 13:41:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:41.072 13:41:35 -- pm/common@21 -- $ date +%s 00:02:41.072 13:41:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:41.072 13:41:35 -- pm/common@21 -- $ date +%s 00:02:41.072 13:41:35 -- pm/common@25 -- $ sleep 1 00:02:41.072 13:41:35 -- pm/common@21 -- $ date +%s 00:02:41.072 13:41:35 -- pm/common@21 -- $ date +%s 00:02:41.072 13:41:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721043695 00:02:41.072 13:41:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721043695 00:02:41.072 13:41:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721043695 00:02:41.072 13:41:35 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721043695 00:02:41.072 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721043695_collect-vmstat.pm.log 00:02:41.072 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721043695_collect-cpu-load.pm.log 00:02:41.072 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721043695_collect-cpu-temp.pm.log 00:02:41.072 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721043695_collect-bmc-pm.bmc.pm.log 00:02:42.009 13:41:36 -- common/autobuild_common.sh@492 -- $ trap stop_monitor_resources EXIT 00:02:42.009 13:41:36 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:42.009 13:41:36 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:42.009 13:41:36 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:42.009 13:41:36 -- spdk/autobuild.sh@16 -- $ date -u 00:02:42.009 Mon Jul 15 11:41:36 AM UTC 2024 00:02:42.009 13:41:36 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:42.009 v24.09-pre-208-gb124a6951 00:02:42.009 13:41:36 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:42.009 13:41:36 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:42.009 13:41:36 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:42.009 13:41:36 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:42.009 13:41:36 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:42.009 13:41:36 -- common/autotest_common.sh@10 -- $ set +x 00:02:42.009 ************************************ 00:02:42.009 START TEST ubsan 00:02:42.009 ************************************ 00:02:42.009 13:41:36 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:02:42.009 using ubsan 00:02:42.009 00:02:42.009 real 0m0.000s 00:02:42.009 user 0m0.000s 00:02:42.009 sys 0m0.000s 00:02:42.009 13:41:36 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:42.009 13:41:36 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:42.009 ************************************ 00:02:42.009 END TEST ubsan 00:02:42.009 ************************************ 00:02:42.009 13:41:36 -- common/autotest_common.sh@1142 -- $ return 0 00:02:42.009 13:41:36 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:42.009 13:41:36 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:42.009 13:41:36 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:42.009 13:41:36 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:42.009 13:41:36 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:42.009 13:41:36 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:42.009 13:41:36 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:42.009 13:41:36 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:42.009 13:41:36 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:02:42.009 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:42.009 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:42.579 Using 'verbs' RDMA provider 00:02:53.179 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:03.169 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:03.169 Creating mk/config.mk...done. 00:03:03.169 Creating mk/cc.flags.mk...done. 00:03:03.169 Type 'make' to build. 00:03:03.169 13:41:57 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:03:03.169 13:41:57 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:03:03.169 13:41:57 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:03.169 13:41:57 -- common/autotest_common.sh@10 -- $ set +x 00:03:03.169 ************************************ 00:03:03.169 START TEST make 00:03:03.169 ************************************ 00:03:03.169 13:41:57 make -- common/autotest_common.sh@1123 -- $ make -j48 00:03:03.169 make[1]: Nothing to be done for 'all'. 
00:03:04.566 The Meson build system 00:03:04.566 Version: 1.3.1 00:03:04.566 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:04.566 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:04.566 Build type: native build 00:03:04.566 Project name: libvfio-user 00:03:04.566 Project version: 0.0.1 00:03:04.566 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:04.566 C linker for the host machine: cc ld.bfd 2.39-16 00:03:04.566 Host machine cpu family: x86_64 00:03:04.566 Host machine cpu: x86_64 00:03:04.566 Run-time dependency threads found: YES 00:03:04.566 Library dl found: YES 00:03:04.566 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:04.566 Run-time dependency json-c found: YES 0.17 00:03:04.566 Run-time dependency cmocka found: YES 1.1.7 00:03:04.566 Program pytest-3 found: NO 00:03:04.566 Program flake8 found: NO 00:03:04.566 Program misspell-fixer found: NO 00:03:04.566 Program restructuredtext-lint found: NO 00:03:04.566 Program valgrind found: YES (/usr/bin/valgrind) 00:03:04.566 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:04.566 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:04.566 Compiler for C supports arguments -Wwrite-strings: YES 00:03:04.566 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:04.566 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:04.566 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:04.566 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:03:04.566 Build targets in project: 8 00:03:04.566 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:04.566 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:04.566 00:03:04.566 libvfio-user 0.0.1 00:03:04.566 00:03:04.566 User defined options 00:03:04.566 buildtype : debug 00:03:04.566 default_library: shared 00:03:04.566 libdir : /usr/local/lib 00:03:04.566 00:03:04.566 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:05.152 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:05.152 [1/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:05.410 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:05.410 [3/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:05.410 [4/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:05.410 [5/37] Compiling C object samples/null.p/null.c.o 00:03:05.410 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:05.410 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:05.410 [8/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:05.410 [9/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:05.410 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:05.410 [11/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:05.410 [12/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:05.410 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:05.410 [14/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:05.410 [15/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:05.410 [16/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:05.410 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:05.410 [18/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:05.410 [19/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:05.410 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:05.410 [21/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:05.410 [22/37] Compiling C object samples/server.p/server.c.o 00:03:05.410 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:05.410 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:05.673 [25/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:05.673 [26/37] Compiling C object samples/client.p/client.c.o 00:03:05.673 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:05.673 [28/37] Linking target samples/client 00:03:05.673 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:03:05.673 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:05.673 [31/37] Linking target test/unit_tests 00:03:05.935 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:05.935 [33/37] Linking target samples/null 00:03:05.935 [34/37] Linking target samples/server 00:03:05.935 [35/37] Linking target samples/lspci 00:03:05.935 [36/37] Linking target samples/gpio-pci-idio-16 00:03:05.935 [37/37] Linking target samples/shadow_ioeventfd_server 00:03:05.935 INFO: autodetecting backend as ninja 00:03:05.935 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:03:05.935 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:06.878 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:06.878 ninja: no work to do. 00:03:11.059 The Meson build system 00:03:11.059 Version: 1.3.1 00:03:11.059 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:03:11.059 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:03:11.059 Build type: native build 00:03:11.059 Program cat found: YES (/usr/bin/cat) 00:03:11.059 Project name: DPDK 00:03:11.059 Project version: 24.03.0 00:03:11.059 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:11.059 C linker for the host machine: cc ld.bfd 2.39-16 00:03:11.059 Host machine cpu family: x86_64 00:03:11.059 Host machine cpu: x86_64 00:03:11.059 Message: ## Building in Developer Mode ## 00:03:11.059 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:11.059 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:03:11.059 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:11.059 Program python3 found: YES (/usr/bin/python3) 00:03:11.059 Program cat found: YES (/usr/bin/cat) 00:03:11.059 Compiler for C supports arguments -march=native: YES 00:03:11.059 Checking for size of "void *" : 8 00:03:11.059 Checking for size of "void *" : 8 (cached) 00:03:11.059 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:03:11.059 Library m found: YES 00:03:11.059 Library numa found: YES 00:03:11.059 Has header "numaif.h" : YES 00:03:11.059 Library fdt found: NO 00:03:11.059 Library execinfo found: NO 00:03:11.059 Has header "execinfo.h" : YES 00:03:11.059 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:11.059 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:11.059 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:11.059 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:11.059 Run-time dependency openssl found: YES 3.0.9 00:03:11.059 Run-time dependency libpcap found: YES 1.10.4 00:03:11.059 Has header "pcap.h" with dependency libpcap: YES 00:03:11.059 Compiler for C supports arguments -Wcast-qual: YES 00:03:11.059 Compiler for C supports arguments -Wdeprecated: YES 00:03:11.059 Compiler for C supports arguments -Wformat: YES 00:03:11.059 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:11.059 Compiler for C supports arguments -Wformat-security: NO 00:03:11.059 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:11.059 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:11.059 Compiler for C supports arguments -Wnested-externs: YES 00:03:11.059 Compiler for C supports arguments -Wold-style-definition: YES 00:03:11.059 Compiler for C supports arguments -Wpointer-arith: YES 00:03:11.059 Compiler for C supports arguments -Wsign-compare: YES 00:03:11.059 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:11.059 Compiler for C supports arguments -Wundef: YES 00:03:11.059 Compiler for C supports arguments -Wwrite-strings: YES 00:03:11.059 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:11.059 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:03:11.059 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:11.059 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:11.059 Program objdump found: YES (/usr/bin/objdump) 00:03:11.059 Compiler for C supports arguments -mavx512f: YES 00:03:11.059 Checking if "AVX512 checking" compiles: YES 00:03:11.059 Fetching value of define "__SSE4_2__" : 1 00:03:11.059 Fetching value of define "__AES__" : 1 00:03:11.059 Fetching value of define "__AVX__" : 1 00:03:11.059 Fetching value of define "__AVX2__" : (undefined) 00:03:11.059 Fetching value of define "__AVX512BW__" : (undefined) 00:03:11.059 Fetching value of define "__AVX512CD__" : (undefined) 00:03:11.059 Fetching value of define "__AVX512DQ__" : (undefined) 00:03:11.059 Fetching value of define "__AVX512F__" : (undefined) 00:03:11.059 Fetching value of define "__AVX512VL__" : (undefined) 00:03:11.059 Fetching value of define "__PCLMUL__" : 1 00:03:11.059 Fetching value of define "__RDRND__" : 1 00:03:11.059 Fetching value of define "__RDSEED__" : (undefined) 00:03:11.059 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:11.059 Fetching value of define "__znver1__" : (undefined) 00:03:11.059 Fetching value of define "__znver2__" : (undefined) 00:03:11.059 Fetching value of define "__znver3__" : (undefined) 00:03:11.059 Fetching value of define "__znver4__" : (undefined) 00:03:11.059 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:11.059 Message: lib/log: Defining dependency "log" 00:03:11.059 Message: lib/kvargs: Defining dependency "kvargs" 00:03:11.059 Message: lib/telemetry: Defining dependency "telemetry" 00:03:11.059 Checking for function "getentropy" : NO 00:03:11.059 Message: lib/eal: Defining dependency "eal" 00:03:11.059 Message: lib/ring: Defining dependency "ring" 00:03:11.059 Message: lib/rcu: Defining dependency "rcu" 00:03:11.059 Message: lib/mempool: Defining dependency "mempool" 00:03:11.060 Message: lib/mbuf: Defining dependency "mbuf" 00:03:11.060 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:11.060 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:11.060 Compiler for C supports arguments -mpclmul: YES 00:03:11.060 Compiler for C supports arguments -maes: YES 00:03:11.060 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:11.060 Compiler for C supports arguments -mavx512bw: YES 00:03:11.060 Compiler for C supports arguments -mavx512dq: YES 00:03:11.060 Compiler for C supports arguments -mavx512vl: YES 00:03:11.060 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:11.060 Compiler for C supports arguments -mavx2: YES 00:03:11.060 Compiler for C supports arguments -mavx: YES 00:03:11.060 Message: lib/net: Defining dependency "net" 00:03:11.060 Message: lib/meter: Defining dependency "meter" 00:03:11.060 Message: lib/ethdev: Defining dependency "ethdev" 00:03:11.060 Message: lib/pci: Defining dependency "pci" 00:03:11.060 Message: lib/cmdline: Defining dependency "cmdline" 00:03:11.060 Message: lib/hash: Defining dependency "hash" 00:03:11.060 Message: lib/timer: Defining dependency "timer" 00:03:11.060 Message: lib/compressdev: Defining dependency "compressdev" 00:03:11.060 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:11.060 Message: lib/dmadev: Defining dependency "dmadev" 00:03:11.060 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:11.060 Message: lib/power: Defining dependency "power" 00:03:11.060 Message: lib/reorder: Defining dependency "reorder" 00:03:11.060 
Message: lib/security: Defining dependency "security" 00:03:11.060 Has header "linux/userfaultfd.h" : YES 00:03:11.060 Has header "linux/vduse.h" : YES 00:03:11.060 Message: lib/vhost: Defining dependency "vhost" 00:03:11.060 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:11.060 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:11.060 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:11.060 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:11.060 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:11.060 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:11.060 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:11.060 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:11.060 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:11.060 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:11.060 Program doxygen found: YES (/usr/bin/doxygen) 00:03:11.060 Configuring doxy-api-html.conf using configuration 00:03:11.060 Configuring doxy-api-man.conf using configuration 00:03:11.060 Program mandb found: YES (/usr/bin/mandb) 00:03:11.060 Program sphinx-build found: NO 00:03:11.060 Configuring rte_build_config.h using configuration 00:03:11.060 Message: 00:03:11.060 ================= 00:03:11.060 Applications Enabled 00:03:11.060 ================= 00:03:11.060 00:03:11.060 apps: 00:03:11.060 00:03:11.060 00:03:11.060 Message: 00:03:11.060 ================= 00:03:11.060 Libraries Enabled 00:03:11.060 ================= 00:03:11.060 00:03:11.060 libs: 00:03:11.060 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:11.060 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:11.060 cryptodev, dmadev, power, reorder, security, vhost, 00:03:11.060 00:03:11.060 Message: 00:03:11.060 =============== 00:03:11.060 Drivers Enabled 00:03:11.060 =============== 00:03:11.060 00:03:11.060 common: 00:03:11.060 00:03:11.060 bus: 00:03:11.060 pci, vdev, 00:03:11.060 mempool: 00:03:11.060 ring, 00:03:11.060 dma: 00:03:11.060 00:03:11.060 net: 00:03:11.060 00:03:11.060 crypto: 00:03:11.060 00:03:11.060 compress: 00:03:11.060 00:03:11.060 vdpa: 00:03:11.060 00:03:11.060 00:03:11.060 Message: 00:03:11.060 ================= 00:03:11.060 Content Skipped 00:03:11.060 ================= 00:03:11.060 00:03:11.060 apps: 00:03:11.060 dumpcap: explicitly disabled via build config 00:03:11.060 graph: explicitly disabled via build config 00:03:11.060 pdump: explicitly disabled via build config 00:03:11.060 proc-info: explicitly disabled via build config 00:03:11.060 test-acl: explicitly disabled via build config 00:03:11.060 test-bbdev: explicitly disabled via build config 00:03:11.060 test-cmdline: explicitly disabled via build config 00:03:11.060 test-compress-perf: explicitly disabled via build config 00:03:11.060 test-crypto-perf: explicitly disabled via build config 00:03:11.060 test-dma-perf: explicitly disabled via build config 00:03:11.060 test-eventdev: explicitly disabled via build config 00:03:11.060 test-fib: explicitly disabled via build config 00:03:11.060 test-flow-perf: explicitly disabled via build config 00:03:11.060 test-gpudev: explicitly disabled via build config 00:03:11.060 test-mldev: explicitly disabled via build config 00:03:11.060 test-pipeline: explicitly disabled via build config 00:03:11.060 test-pmd: explicitly disabled via build config 
00:03:11.060 test-regex: explicitly disabled via build config 00:03:11.060 test-sad: explicitly disabled via build config 00:03:11.060 test-security-perf: explicitly disabled via build config 00:03:11.060 00:03:11.060 libs: 00:03:11.060 argparse: explicitly disabled via build config 00:03:11.060 metrics: explicitly disabled via build config 00:03:11.060 acl: explicitly disabled via build config 00:03:11.060 bbdev: explicitly disabled via build config 00:03:11.060 bitratestats: explicitly disabled via build config 00:03:11.060 bpf: explicitly disabled via build config 00:03:11.060 cfgfile: explicitly disabled via build config 00:03:11.060 distributor: explicitly disabled via build config 00:03:11.060 efd: explicitly disabled via build config 00:03:11.060 eventdev: explicitly disabled via build config 00:03:11.060 dispatcher: explicitly disabled via build config 00:03:11.060 gpudev: explicitly disabled via build config 00:03:11.060 gro: explicitly disabled via build config 00:03:11.060 gso: explicitly disabled via build config 00:03:11.060 ip_frag: explicitly disabled via build config 00:03:11.060 jobstats: explicitly disabled via build config 00:03:11.060 latencystats: explicitly disabled via build config 00:03:11.060 lpm: explicitly disabled via build config 00:03:11.060 member: explicitly disabled via build config 00:03:11.060 pcapng: explicitly disabled via build config 00:03:11.060 rawdev: explicitly disabled via build config 00:03:11.060 regexdev: explicitly disabled via build config 00:03:11.060 mldev: explicitly disabled via build config 00:03:11.060 rib: explicitly disabled via build config 00:03:11.060 sched: explicitly disabled via build config 00:03:11.060 stack: explicitly disabled via build config 00:03:11.060 ipsec: explicitly disabled via build config 00:03:11.060 pdcp: explicitly disabled via build config 00:03:11.060 fib: explicitly disabled via build config 00:03:11.060 port: explicitly disabled via build config 00:03:11.060 pdump: explicitly disabled via build config 00:03:11.060 table: explicitly disabled via build config 00:03:11.060 pipeline: explicitly disabled via build config 00:03:11.060 graph: explicitly disabled via build config 00:03:11.060 node: explicitly disabled via build config 00:03:11.060 00:03:11.060 drivers: 00:03:11.060 common/cpt: not in enabled drivers build config 00:03:11.060 common/dpaax: not in enabled drivers build config 00:03:11.060 common/iavf: not in enabled drivers build config 00:03:11.060 common/idpf: not in enabled drivers build config 00:03:11.060 common/ionic: not in enabled drivers build config 00:03:11.060 common/mvep: not in enabled drivers build config 00:03:11.060 common/octeontx: not in enabled drivers build config 00:03:11.060 bus/auxiliary: not in enabled drivers build config 00:03:11.060 bus/cdx: not in enabled drivers build config 00:03:11.060 bus/dpaa: not in enabled drivers build config 00:03:11.060 bus/fslmc: not in enabled drivers build config 00:03:11.060 bus/ifpga: not in enabled drivers build config 00:03:11.060 bus/platform: not in enabled drivers build config 00:03:11.060 bus/uacce: not in enabled drivers build config 00:03:11.060 bus/vmbus: not in enabled drivers build config 00:03:11.060 common/cnxk: not in enabled drivers build config 00:03:11.060 common/mlx5: not in enabled drivers build config 00:03:11.060 common/nfp: not in enabled drivers build config 00:03:11.060 common/nitrox: not in enabled drivers build config 00:03:11.060 common/qat: not in enabled drivers build config 00:03:11.060 common/sfc_efx: not in 
enabled drivers build config 00:03:11.060 mempool/bucket: not in enabled drivers build config 00:03:11.060 mempool/cnxk: not in enabled drivers build config 00:03:11.060 mempool/dpaa: not in enabled drivers build config 00:03:11.060 mempool/dpaa2: not in enabled drivers build config 00:03:11.060 mempool/octeontx: not in enabled drivers build config 00:03:11.060 mempool/stack: not in enabled drivers build config 00:03:11.060 dma/cnxk: not in enabled drivers build config 00:03:11.060 dma/dpaa: not in enabled drivers build config 00:03:11.060 dma/dpaa2: not in enabled drivers build config 00:03:11.060 dma/hisilicon: not in enabled drivers build config 00:03:11.060 dma/idxd: not in enabled drivers build config 00:03:11.060 dma/ioat: not in enabled drivers build config 00:03:11.060 dma/skeleton: not in enabled drivers build config 00:03:11.060 net/af_packet: not in enabled drivers build config 00:03:11.060 net/af_xdp: not in enabled drivers build config 00:03:11.060 net/ark: not in enabled drivers build config 00:03:11.060 net/atlantic: not in enabled drivers build config 00:03:11.060 net/avp: not in enabled drivers build config 00:03:11.060 net/axgbe: not in enabled drivers build config 00:03:11.060 net/bnx2x: not in enabled drivers build config 00:03:11.060 net/bnxt: not in enabled drivers build config 00:03:11.060 net/bonding: not in enabled drivers build config 00:03:11.060 net/cnxk: not in enabled drivers build config 00:03:11.060 net/cpfl: not in enabled drivers build config 00:03:11.060 net/cxgbe: not in enabled drivers build config 00:03:11.060 net/dpaa: not in enabled drivers build config 00:03:11.060 net/dpaa2: not in enabled drivers build config 00:03:11.060 net/e1000: not in enabled drivers build config 00:03:11.060 net/ena: not in enabled drivers build config 00:03:11.060 net/enetc: not in enabled drivers build config 00:03:11.060 net/enetfec: not in enabled drivers build config 00:03:11.060 net/enic: not in enabled drivers build config 00:03:11.060 net/failsafe: not in enabled drivers build config 00:03:11.060 net/fm10k: not in enabled drivers build config 00:03:11.060 net/gve: not in enabled drivers build config 00:03:11.060 net/hinic: not in enabled drivers build config 00:03:11.060 net/hns3: not in enabled drivers build config 00:03:11.061 net/i40e: not in enabled drivers build config 00:03:11.061 net/iavf: not in enabled drivers build config 00:03:11.061 net/ice: not in enabled drivers build config 00:03:11.061 net/idpf: not in enabled drivers build config 00:03:11.061 net/igc: not in enabled drivers build config 00:03:11.061 net/ionic: not in enabled drivers build config 00:03:11.061 net/ipn3ke: not in enabled drivers build config 00:03:11.061 net/ixgbe: not in enabled drivers build config 00:03:11.061 net/mana: not in enabled drivers build config 00:03:11.061 net/memif: not in enabled drivers build config 00:03:11.061 net/mlx4: not in enabled drivers build config 00:03:11.061 net/mlx5: not in enabled drivers build config 00:03:11.061 net/mvneta: not in enabled drivers build config 00:03:11.061 net/mvpp2: not in enabled drivers build config 00:03:11.061 net/netvsc: not in enabled drivers build config 00:03:11.061 net/nfb: not in enabled drivers build config 00:03:11.061 net/nfp: not in enabled drivers build config 00:03:11.061 net/ngbe: not in enabled drivers build config 00:03:11.061 net/null: not in enabled drivers build config 00:03:11.061 net/octeontx: not in enabled drivers build config 00:03:11.061 net/octeon_ep: not in enabled drivers build config 00:03:11.061 
net/pcap: not in enabled drivers build config 00:03:11.061 net/pfe: not in enabled drivers build config 00:03:11.061 net/qede: not in enabled drivers build config 00:03:11.061 net/ring: not in enabled drivers build config 00:03:11.061 net/sfc: not in enabled drivers build config 00:03:11.061 net/softnic: not in enabled drivers build config 00:03:11.061 net/tap: not in enabled drivers build config 00:03:11.061 net/thunderx: not in enabled drivers build config 00:03:11.061 net/txgbe: not in enabled drivers build config 00:03:11.061 net/vdev_netvsc: not in enabled drivers build config 00:03:11.061 net/vhost: not in enabled drivers build config 00:03:11.061 net/virtio: not in enabled drivers build config 00:03:11.061 net/vmxnet3: not in enabled drivers build config 00:03:11.061 raw/*: missing internal dependency, "rawdev" 00:03:11.061 crypto/armv8: not in enabled drivers build config 00:03:11.061 crypto/bcmfs: not in enabled drivers build config 00:03:11.061 crypto/caam_jr: not in enabled drivers build config 00:03:11.061 crypto/ccp: not in enabled drivers build config 00:03:11.061 crypto/cnxk: not in enabled drivers build config 00:03:11.061 crypto/dpaa_sec: not in enabled drivers build config 00:03:11.061 crypto/dpaa2_sec: not in enabled drivers build config 00:03:11.061 crypto/ipsec_mb: not in enabled drivers build config 00:03:11.061 crypto/mlx5: not in enabled drivers build config 00:03:11.061 crypto/mvsam: not in enabled drivers build config 00:03:11.061 crypto/nitrox: not in enabled drivers build config 00:03:11.061 crypto/null: not in enabled drivers build config 00:03:11.061 crypto/octeontx: not in enabled drivers build config 00:03:11.061 crypto/openssl: not in enabled drivers build config 00:03:11.061 crypto/scheduler: not in enabled drivers build config 00:03:11.061 crypto/uadk: not in enabled drivers build config 00:03:11.061 crypto/virtio: not in enabled drivers build config 00:03:11.061 compress/isal: not in enabled drivers build config 00:03:11.061 compress/mlx5: not in enabled drivers build config 00:03:11.061 compress/nitrox: not in enabled drivers build config 00:03:11.061 compress/octeontx: not in enabled drivers build config 00:03:11.061 compress/zlib: not in enabled drivers build config 00:03:11.061 regex/*: missing internal dependency, "regexdev" 00:03:11.061 ml/*: missing internal dependency, "mldev" 00:03:11.061 vdpa/ifc: not in enabled drivers build config 00:03:11.061 vdpa/mlx5: not in enabled drivers build config 00:03:11.061 vdpa/nfp: not in enabled drivers build config 00:03:11.061 vdpa/sfc: not in enabled drivers build config 00:03:11.061 event/*: missing internal dependency, "eventdev" 00:03:11.061 baseband/*: missing internal dependency, "bbdev" 00:03:11.061 gpu/*: missing internal dependency, "gpudev" 00:03:11.061 00:03:11.061 00:03:11.319 Build targets in project: 85 00:03:11.319 00:03:11.319 DPDK 24.03.0 00:03:11.319 00:03:11.319 User defined options 00:03:11.319 buildtype : debug 00:03:11.319 default_library : shared 00:03:11.319 libdir : lib 00:03:11.319 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:03:11.319 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:11.319 c_link_args : 00:03:11.319 cpu_instruction_set: native 00:03:11.319 disable_apps : 
test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:03:11.319 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:03:11.319 enable_docs : false 00:03:11.319 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:03:11.319 enable_kmods : false 00:03:11.319 max_lcores : 128 00:03:11.319 tests : false 00:03:11.319 00:03:11.319 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:11.888 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:03:11.888 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:11.889 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:11.889 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:11.889 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:11.889 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:11.889 [6/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:11.889 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:11.889 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:11.889 [9/268] Linking static target lib/librte_kvargs.a 00:03:11.889 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:11.889 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:12.152 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:12.152 [13/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:12.152 [14/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:12.152 [15/268] Linking static target lib/librte_log.a 00:03:12.152 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:12.724 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.724 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:12.724 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:12.724 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:12.724 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:12.724 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:12.724 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:12.724 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:12.724 [25/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:12.983 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:12.983 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:12.983 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:12.983 [29/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:12.983 [30/268] Compiling C object 
lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:12.983 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:12.983 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:12.984 [33/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:12.984 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:12.984 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:12.984 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:12.984 [37/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:12.984 [38/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:12.984 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:12.984 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:12.984 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:12.984 [42/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:12.984 [43/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:12.984 [44/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:12.984 [45/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:12.984 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:12.984 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:12.984 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:12.984 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:12.984 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:12.984 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:12.984 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:12.984 [53/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:12.984 [54/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:12.984 [55/268] Linking static target lib/librte_telemetry.a 00:03:12.984 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:12.984 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:12.984 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:12.984 [59/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:13.251 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:13.251 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:13.251 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:13.251 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:13.251 [64/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.251 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:13.251 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:13.251 [67/268] Linking target lib/librte_log.so.24.1 00:03:13.514 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:13.514 [69/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:13.514 [70/268] Linking static target lib/librte_pci.a 
00:03:13.514 [71/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:13.514 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:13.775 [73/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:13.775 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:13.775 [75/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:13.775 [76/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:13.775 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:13.775 [78/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:13.775 [79/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:13.775 [80/268] Linking target lib/librte_kvargs.so.24.1 00:03:13.775 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:13.775 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:13.775 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:13.775 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:13.775 [85/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:13.775 [86/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:13.775 [87/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:13.775 [88/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:13.775 [89/268] Linking static target lib/librte_ring.a 00:03:13.775 [90/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:14.037 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:14.037 [92/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:14.037 [93/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:14.037 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:14.037 [95/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:14.037 [96/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:14.037 [97/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:14.037 [98/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:14.037 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:14.037 [100/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:14.037 [101/268] Linking static target lib/librte_meter.a 00:03:14.037 [102/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.037 [103/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:14.037 [104/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:14.037 [105/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.037 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:14.037 [107/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:14.037 [108/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:14.037 [109/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:14.037 [110/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:14.037 [111/268] Linking 
target lib/librte_telemetry.so.24.1 00:03:14.037 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:14.037 [113/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:14.037 [114/268] Linking static target lib/librte_rcu.a 00:03:14.299 [115/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:14.299 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:14.299 [117/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:14.299 [118/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:14.299 [119/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:14.299 [120/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:14.299 [121/268] Linking static target lib/librte_mempool.a 00:03:14.299 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:14.299 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:14.299 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:14.299 [125/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:14.299 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:14.299 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:14.299 [128/268] Linking static target lib/librte_eal.a 00:03:14.299 [129/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:14.299 [130/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:14.299 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:14.299 [132/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.560 [133/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.560 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:14.560 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:14.560 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:14.560 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:14.560 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:14.560 [139/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:14.560 [140/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.821 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:14.821 [142/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:14.821 [143/268] Linking static target lib/librte_net.a 00:03:14.821 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:14.821 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:14.821 [146/268] Linking static target lib/librte_cmdline.a 00:03:14.821 [147/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:14.821 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:14.821 [149/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:14.821 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:15.080 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 
00:03:15.080 [152/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:15.080 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:15.080 [154/268] Linking static target lib/librte_timer.a 00:03:15.080 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:15.080 [156/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:15.080 [157/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:15.081 [158/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:15.081 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:15.081 [160/268] Linking static target lib/librte_dmadev.a 00:03:15.081 [161/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.339 [162/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:15.339 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:15.339 [164/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:15.339 [165/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:15.339 [166/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.339 [167/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:15.339 [168/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:15.339 [169/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:15.339 [170/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:15.339 [171/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.339 [172/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:15.339 [173/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:15.339 [174/268] Linking static target lib/librte_power.a 00:03:15.339 [175/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:15.339 [176/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:15.596 [177/268] Linking static target lib/librte_compressdev.a 00:03:15.596 [178/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:15.596 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:15.596 [180/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:15.596 [181/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:15.597 [182/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:15.597 [183/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:15.597 [184/268] Linking static target lib/librte_hash.a 00:03:15.597 [185/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:15.597 [186/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:15.597 [187/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:15.597 [188/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:15.597 [189/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:15.597 [190/268] Linking static target lib/librte_reorder.a 00:03:15.597 [191/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.597 
[192/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:15.597 [193/268] Linking static target lib/librte_mbuf.a 00:03:15.597 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:15.597 [195/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.855 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:15.855 [197/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:15.855 [198/268] Linking static target lib/librte_security.a 00:03:15.855 [199/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:15.855 [200/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:15.855 [201/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:15.855 [202/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:15.855 [203/268] Linking static target drivers/librte_bus_vdev.a 00:03:15.855 [204/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:15.855 [205/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:15.855 [206/268] Linking static target drivers/librte_bus_pci.a 00:03:15.855 [207/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.855 [208/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:15.855 [209/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.855 [210/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:15.855 [211/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.113 [212/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.113 [213/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:16.113 [214/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.113 [215/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.113 [216/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:16.113 [217/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:16.113 [218/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:16.113 [219/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:16.113 [220/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.113 [221/268] Linking static target drivers/librte_mempool_ring.a 00:03:16.371 [222/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:16.371 [223/268] Linking static target lib/librte_ethdev.a 00:03:16.371 [224/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.628 [225/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:16.628 [226/268] Linking static target lib/librte_cryptodev.a 00:03:17.559 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.492 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:20.396 [229/268] Generating 
lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.396 [230/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.653 [231/268] Linking target lib/librte_eal.so.24.1 00:03:20.653 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:20.653 [233/268] Linking target lib/librte_meter.so.24.1 00:03:20.653 [234/268] Linking target lib/librte_pci.so.24.1 00:03:20.653 [235/268] Linking target lib/librte_dmadev.so.24.1 00:03:20.653 [236/268] Linking target lib/librte_ring.so.24.1 00:03:20.653 [237/268] Linking target lib/librte_timer.so.24.1 00:03:20.653 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:20.910 [239/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:20.910 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:20.910 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:20.910 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:20.910 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:20.910 [244/268] Linking target lib/librte_rcu.so.24.1 00:03:20.910 [245/268] Linking target lib/librte_mempool.so.24.1 00:03:20.910 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:20.910 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:20.910 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:21.167 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:21.167 [250/268] Linking target lib/librte_mbuf.so.24.1 00:03:21.167 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:21.167 [252/268] Linking target lib/librte_reorder.so.24.1 00:03:21.167 [253/268] Linking target lib/librte_compressdev.so.24.1 00:03:21.167 [254/268] Linking target lib/librte_net.so.24.1 00:03:21.167 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:03:21.425 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:21.425 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:21.425 [258/268] Linking target lib/librte_security.so.24.1 00:03:21.425 [259/268] Linking target lib/librte_cmdline.so.24.1 00:03:21.425 [260/268] Linking target lib/librte_hash.so.24.1 00:03:21.425 [261/268] Linking target lib/librte_ethdev.so.24.1 00:03:21.425 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:21.425 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:21.682 [264/268] Linking target lib/librte_power.so.24.1 00:03:24.236 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:24.236 [266/268] Linking static target lib/librte_vhost.a 00:03:25.173 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.173 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:25.173 INFO: autodetecting backend as ninja 00:03:25.173 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:03:26.108 CC lib/ut/ut.o 00:03:26.108 CC lib/log/log.o 00:03:26.108 CC lib/log/log_flags.o 00:03:26.108 CC lib/log/log_deprecated.o 00:03:26.108 CC 
lib/ut_mock/mock.o 00:03:26.108 LIB libspdk_ut.a 00:03:26.108 LIB libspdk_log.a 00:03:26.108 LIB libspdk_ut_mock.a 00:03:26.108 SO libspdk_ut.so.2.0 00:03:26.108 SO libspdk_log.so.7.0 00:03:26.108 SO libspdk_ut_mock.so.6.0 00:03:26.108 SYMLINK libspdk_ut.so 00:03:26.366 SYMLINK libspdk_ut_mock.so 00:03:26.366 SYMLINK libspdk_log.so 00:03:26.366 CC lib/dma/dma.o 00:03:26.366 CC lib/util/base64.o 00:03:26.366 CC lib/util/bit_array.o 00:03:26.366 CC lib/ioat/ioat.o 00:03:26.366 CXX lib/trace_parser/trace.o 00:03:26.366 CC lib/util/cpuset.o 00:03:26.366 CC lib/util/crc16.o 00:03:26.366 CC lib/util/crc32.o 00:03:26.366 CC lib/util/crc32c.o 00:03:26.366 CC lib/util/crc32_ieee.o 00:03:26.366 CC lib/util/crc64.o 00:03:26.366 CC lib/util/dif.o 00:03:26.366 CC lib/util/fd.o 00:03:26.366 CC lib/util/file.o 00:03:26.366 CC lib/util/hexlify.o 00:03:26.366 CC lib/util/iov.o 00:03:26.366 CC lib/util/math.o 00:03:26.366 CC lib/util/pipe.o 00:03:26.366 CC lib/util/strerror_tls.o 00:03:26.366 CC lib/util/string.o 00:03:26.366 CC lib/util/uuid.o 00:03:26.366 CC lib/util/fd_group.o 00:03:26.366 CC lib/util/xor.o 00:03:26.366 CC lib/util/zipf.o 00:03:26.623 CC lib/vfio_user/host/vfio_user_pci.o 00:03:26.623 CC lib/vfio_user/host/vfio_user.o 00:03:26.623 LIB libspdk_dma.a 00:03:26.623 SO libspdk_dma.so.4.0 00:03:26.623 SYMLINK libspdk_dma.so 00:03:26.623 LIB libspdk_ioat.a 00:03:26.623 SO libspdk_ioat.so.7.0 00:03:26.880 LIB libspdk_vfio_user.a 00:03:26.880 SYMLINK libspdk_ioat.so 00:03:26.880 SO libspdk_vfio_user.so.5.0 00:03:26.880 SYMLINK libspdk_vfio_user.so 00:03:26.880 LIB libspdk_util.a 00:03:27.136 SO libspdk_util.so.9.1 00:03:27.137 SYMLINK libspdk_util.so 00:03:27.394 CC lib/json/json_parse.o 00:03:27.394 CC lib/json/json_util.o 00:03:27.394 CC lib/idxd/idxd.o 00:03:27.394 CC lib/rdma_utils/rdma_utils.o 00:03:27.394 CC lib/env_dpdk/env.o 00:03:27.394 CC lib/json/json_write.o 00:03:27.394 CC lib/idxd/idxd_user.o 00:03:27.394 CC lib/vmd/vmd.o 00:03:27.394 CC lib/env_dpdk/memory.o 00:03:27.394 CC lib/rdma_provider/common.o 00:03:27.394 CC lib/idxd/idxd_kernel.o 00:03:27.394 CC lib/vmd/led.o 00:03:27.394 CC lib/env_dpdk/pci.o 00:03:27.394 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:27.394 CC lib/env_dpdk/init.o 00:03:27.394 CC lib/env_dpdk/threads.o 00:03:27.394 CC lib/conf/conf.o 00:03:27.394 CC lib/env_dpdk/pci_ioat.o 00:03:27.394 CC lib/env_dpdk/pci_virtio.o 00:03:27.394 CC lib/env_dpdk/pci_vmd.o 00:03:27.394 CC lib/env_dpdk/pci_idxd.o 00:03:27.394 CC lib/env_dpdk/pci_event.o 00:03:27.394 CC lib/env_dpdk/sigbus_handler.o 00:03:27.394 CC lib/env_dpdk/pci_dpdk.o 00:03:27.394 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:27.394 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:27.394 LIB libspdk_trace_parser.a 00:03:27.394 SO libspdk_trace_parser.so.5.0 00:03:27.652 SYMLINK libspdk_trace_parser.so 00:03:27.652 LIB libspdk_conf.a 00:03:27.652 LIB libspdk_rdma_provider.a 00:03:27.652 SO libspdk_conf.so.6.0 00:03:27.652 SO libspdk_rdma_provider.so.6.0 00:03:27.652 LIB libspdk_rdma_utils.a 00:03:27.652 LIB libspdk_json.a 00:03:27.652 SO libspdk_rdma_utils.so.1.0 00:03:27.652 SYMLINK libspdk_conf.so 00:03:27.652 SO libspdk_json.so.6.0 00:03:27.652 SYMLINK libspdk_rdma_provider.so 00:03:27.652 SYMLINK libspdk_rdma_utils.so 00:03:27.652 SYMLINK libspdk_json.so 00:03:27.909 CC lib/jsonrpc/jsonrpc_server.o 00:03:27.909 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:27.909 CC lib/jsonrpc/jsonrpc_client.o 00:03:27.909 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:27.909 LIB libspdk_idxd.a 00:03:27.909 SO libspdk_idxd.so.12.0 00:03:27.909 
LIB libspdk_vmd.a 00:03:27.909 SYMLINK libspdk_idxd.so 00:03:27.909 SO libspdk_vmd.so.6.0 00:03:28.167 SYMLINK libspdk_vmd.so 00:03:28.167 LIB libspdk_jsonrpc.a 00:03:28.167 SO libspdk_jsonrpc.so.6.0 00:03:28.167 SYMLINK libspdk_jsonrpc.so 00:03:28.425 CC lib/rpc/rpc.o 00:03:28.681 LIB libspdk_rpc.a 00:03:28.682 SO libspdk_rpc.so.6.0 00:03:28.682 SYMLINK libspdk_rpc.so 00:03:28.938 CC lib/notify/notify.o 00:03:28.938 CC lib/trace/trace.o 00:03:28.938 CC lib/keyring/keyring.o 00:03:28.938 CC lib/notify/notify_rpc.o 00:03:28.938 CC lib/trace/trace_flags.o 00:03:28.938 CC lib/keyring/keyring_rpc.o 00:03:28.938 CC lib/trace/trace_rpc.o 00:03:28.938 LIB libspdk_notify.a 00:03:28.938 SO libspdk_notify.so.6.0 00:03:29.195 LIB libspdk_keyring.a 00:03:29.195 SYMLINK libspdk_notify.so 00:03:29.195 LIB libspdk_trace.a 00:03:29.195 SO libspdk_keyring.so.1.0 00:03:29.195 SO libspdk_trace.so.10.0 00:03:29.195 SYMLINK libspdk_keyring.so 00:03:29.195 SYMLINK libspdk_trace.so 00:03:29.453 CC lib/sock/sock.o 00:03:29.453 CC lib/sock/sock_rpc.o 00:03:29.453 CC lib/thread/thread.o 00:03:29.453 CC lib/thread/iobuf.o 00:03:29.453 LIB libspdk_env_dpdk.a 00:03:29.453 SO libspdk_env_dpdk.so.14.1 00:03:29.710 SYMLINK libspdk_env_dpdk.so 00:03:29.710 LIB libspdk_sock.a 00:03:29.710 SO libspdk_sock.so.10.0 00:03:29.968 SYMLINK libspdk_sock.so 00:03:29.968 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:29.968 CC lib/nvme/nvme_ctrlr.o 00:03:29.968 CC lib/nvme/nvme_fabric.o 00:03:29.968 CC lib/nvme/nvme_ns_cmd.o 00:03:29.968 CC lib/nvme/nvme_ns.o 00:03:29.968 CC lib/nvme/nvme_pcie_common.o 00:03:29.968 CC lib/nvme/nvme_pcie.o 00:03:29.968 CC lib/nvme/nvme_qpair.o 00:03:29.968 CC lib/nvme/nvme.o 00:03:29.968 CC lib/nvme/nvme_quirks.o 00:03:29.968 CC lib/nvme/nvme_transport.o 00:03:29.968 CC lib/nvme/nvme_discovery.o 00:03:29.968 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:29.968 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:29.968 CC lib/nvme/nvme_tcp.o 00:03:29.968 CC lib/nvme/nvme_opal.o 00:03:29.968 CC lib/nvme/nvme_io_msg.o 00:03:29.968 CC lib/nvme/nvme_poll_group.o 00:03:29.968 CC lib/nvme/nvme_zns.o 00:03:29.968 CC lib/nvme/nvme_stubs.o 00:03:29.968 CC lib/nvme/nvme_auth.o 00:03:29.968 CC lib/nvme/nvme_cuse.o 00:03:29.968 CC lib/nvme/nvme_vfio_user.o 00:03:29.968 CC lib/nvme/nvme_rdma.o 00:03:30.903 LIB libspdk_thread.a 00:03:30.903 SO libspdk_thread.so.10.1 00:03:31.161 SYMLINK libspdk_thread.so 00:03:31.161 CC lib/virtio/virtio.o 00:03:31.161 CC lib/accel/accel.o 00:03:31.161 CC lib/virtio/virtio_vhost_user.o 00:03:31.161 CC lib/accel/accel_rpc.o 00:03:31.161 CC lib/vfu_tgt/tgt_endpoint.o 00:03:31.161 CC lib/vfu_tgt/tgt_rpc.o 00:03:31.161 CC lib/accel/accel_sw.o 00:03:31.161 CC lib/virtio/virtio_vfio_user.o 00:03:31.161 CC lib/virtio/virtio_pci.o 00:03:31.161 CC lib/blob/blobstore.o 00:03:31.161 CC lib/init/json_config.o 00:03:31.161 CC lib/blob/request.o 00:03:31.161 CC lib/init/subsystem.o 00:03:31.161 CC lib/init/subsystem_rpc.o 00:03:31.161 CC lib/blob/zeroes.o 00:03:31.161 CC lib/init/rpc.o 00:03:31.161 CC lib/blob/blob_bs_dev.o 00:03:31.419 LIB libspdk_init.a 00:03:31.419 SO libspdk_init.so.5.0 00:03:31.419 LIB libspdk_virtio.a 00:03:31.677 LIB libspdk_vfu_tgt.a 00:03:31.677 SYMLINK libspdk_init.so 00:03:31.677 SO libspdk_vfu_tgt.so.3.0 00:03:31.677 SO libspdk_virtio.so.7.0 00:03:31.677 SYMLINK libspdk_vfu_tgt.so 00:03:31.677 SYMLINK libspdk_virtio.so 00:03:31.677 CC lib/event/app.o 00:03:31.677 CC lib/event/reactor.o 00:03:31.677 CC lib/event/log_rpc.o 00:03:31.677 CC lib/event/app_rpc.o 00:03:31.677 CC 
lib/event/scheduler_static.o 00:03:32.242 LIB libspdk_event.a 00:03:32.242 SO libspdk_event.so.14.0 00:03:32.242 LIB libspdk_accel.a 00:03:32.242 SYMLINK libspdk_event.so 00:03:32.242 SO libspdk_accel.so.15.1 00:03:32.242 SYMLINK libspdk_accel.so 00:03:32.242 LIB libspdk_nvme.a 00:03:32.500 SO libspdk_nvme.so.13.1 00:03:32.500 CC lib/bdev/bdev.o 00:03:32.500 CC lib/bdev/bdev_rpc.o 00:03:32.500 CC lib/bdev/bdev_zone.o 00:03:32.500 CC lib/bdev/part.o 00:03:32.500 CC lib/bdev/scsi_nvme.o 00:03:32.757 SYMLINK libspdk_nvme.so 00:03:34.130 LIB libspdk_blob.a 00:03:34.130 SO libspdk_blob.so.11.0 00:03:34.387 SYMLINK libspdk_blob.so 00:03:34.387 CC lib/blobfs/blobfs.o 00:03:34.387 CC lib/blobfs/tree.o 00:03:34.387 CC lib/lvol/lvol.o 00:03:34.952 LIB libspdk_bdev.a 00:03:34.952 SO libspdk_bdev.so.15.1 00:03:35.214 SYMLINK libspdk_bdev.so 00:03:35.214 LIB libspdk_blobfs.a 00:03:35.214 SO libspdk_blobfs.so.10.0 00:03:35.214 CC lib/nbd/nbd.o 00:03:35.214 CC lib/scsi/dev.o 00:03:35.214 CC lib/nvmf/ctrlr.o 00:03:35.214 CC lib/nbd/nbd_rpc.o 00:03:35.214 CC lib/ublk/ublk.o 00:03:35.214 CC lib/scsi/lun.o 00:03:35.214 CC lib/ftl/ftl_core.o 00:03:35.214 CC lib/nvmf/ctrlr_discovery.o 00:03:35.214 CC lib/ftl/ftl_init.o 00:03:35.214 CC lib/scsi/port.o 00:03:35.214 CC lib/ublk/ublk_rpc.o 00:03:35.214 CC lib/nvmf/ctrlr_bdev.o 00:03:35.214 CC lib/scsi/scsi.o 00:03:35.214 CC lib/ftl/ftl_layout.o 00:03:35.214 CC lib/nvmf/subsystem.o 00:03:35.214 CC lib/nvmf/nvmf.o 00:03:35.214 CC lib/ftl/ftl_io.o 00:03:35.214 CC lib/scsi/scsi_bdev.o 00:03:35.214 CC lib/ftl/ftl_debug.o 00:03:35.214 CC lib/scsi/scsi_rpc.o 00:03:35.214 CC lib/scsi/scsi_pr.o 00:03:35.214 CC lib/nvmf/nvmf_rpc.o 00:03:35.214 CC lib/ftl/ftl_sb.o 00:03:35.214 CC lib/ftl/ftl_l2p.o 00:03:35.214 CC lib/scsi/task.o 00:03:35.214 CC lib/nvmf/transport.o 00:03:35.214 CC lib/nvmf/tcp.o 00:03:35.214 CC lib/nvmf/stubs.o 00:03:35.214 CC lib/ftl/ftl_l2p_flat.o 00:03:35.214 CC lib/ftl/ftl_nv_cache.o 00:03:35.214 CC lib/nvmf/mdns_server.o 00:03:35.214 CC lib/ftl/ftl_band.o 00:03:35.214 CC lib/nvmf/vfio_user.o 00:03:35.214 CC lib/ftl/ftl_band_ops.o 00:03:35.214 CC lib/nvmf/rdma.o 00:03:35.214 CC lib/ftl/ftl_writer.o 00:03:35.214 CC lib/nvmf/auth.o 00:03:35.214 CC lib/ftl/ftl_rq.o 00:03:35.214 CC lib/ftl/ftl_reloc.o 00:03:35.214 CC lib/ftl/ftl_l2p_cache.o 00:03:35.214 CC lib/ftl/ftl_p2l.o 00:03:35.214 CC lib/ftl/mngt/ftl_mngt.o 00:03:35.214 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:35.214 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:35.214 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:35.214 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:35.481 SYMLINK libspdk_blobfs.so 00:03:35.481 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:35.481 LIB libspdk_lvol.a 00:03:35.481 SO libspdk_lvol.so.10.0 00:03:35.744 SYMLINK libspdk_lvol.so 00:03:35.744 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:35.744 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:35.744 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:35.744 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:35.744 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:35.744 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:35.744 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:35.744 CC lib/ftl/utils/ftl_conf.o 00:03:35.744 CC lib/ftl/utils/ftl_md.o 00:03:35.744 CC lib/ftl/utils/ftl_mempool.o 00:03:35.744 CC lib/ftl/utils/ftl_bitmap.o 00:03:35.744 CC lib/ftl/utils/ftl_property.o 00:03:35.744 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:35.744 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:35.744 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:35.744 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:35.744 CC 
lib/ftl/upgrade/ftl_band_upgrade.o 00:03:35.744 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:35.744 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:36.004 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:36.004 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:36.004 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:36.004 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:36.004 CC lib/ftl/base/ftl_base_dev.o 00:03:36.004 CC lib/ftl/base/ftl_base_bdev.o 00:03:36.004 CC lib/ftl/ftl_trace.o 00:03:36.004 LIB libspdk_nbd.a 00:03:36.262 SO libspdk_nbd.so.7.0 00:03:36.262 LIB libspdk_scsi.a 00:03:36.262 SYMLINK libspdk_nbd.so 00:03:36.262 SO libspdk_scsi.so.9.0 00:03:36.262 LIB libspdk_ublk.a 00:03:36.262 SO libspdk_ublk.so.3.0 00:03:36.262 SYMLINK libspdk_scsi.so 00:03:36.519 SYMLINK libspdk_ublk.so 00:03:36.519 CC lib/vhost/vhost.o 00:03:36.519 CC lib/iscsi/conn.o 00:03:36.519 CC lib/iscsi/init_grp.o 00:03:36.519 CC lib/iscsi/iscsi.o 00:03:36.519 CC lib/vhost/vhost_rpc.o 00:03:36.519 CC lib/iscsi/md5.o 00:03:36.519 CC lib/vhost/vhost_scsi.o 00:03:36.519 CC lib/iscsi/param.o 00:03:36.519 CC lib/vhost/vhost_blk.o 00:03:36.519 CC lib/iscsi/portal_grp.o 00:03:36.519 CC lib/vhost/rte_vhost_user.o 00:03:36.519 CC lib/iscsi/tgt_node.o 00:03:36.519 CC lib/iscsi/iscsi_subsystem.o 00:03:36.519 CC lib/iscsi/iscsi_rpc.o 00:03:36.519 CC lib/iscsi/task.o 00:03:36.777 LIB libspdk_ftl.a 00:03:36.777 SO libspdk_ftl.so.9.0 00:03:37.341 SYMLINK libspdk_ftl.so 00:03:37.906 LIB libspdk_vhost.a 00:03:37.906 SO libspdk_vhost.so.8.0 00:03:37.906 SYMLINK libspdk_vhost.so 00:03:37.906 LIB libspdk_nvmf.a 00:03:37.906 LIB libspdk_iscsi.a 00:03:37.906 SO libspdk_nvmf.so.18.1 00:03:37.906 SO libspdk_iscsi.so.8.0 00:03:38.164 SYMLINK libspdk_iscsi.so 00:03:38.164 SYMLINK libspdk_nvmf.so 00:03:38.421 CC module/env_dpdk/env_dpdk_rpc.o 00:03:38.421 CC module/vfu_device/vfu_virtio.o 00:03:38.421 CC module/vfu_device/vfu_virtio_blk.o 00:03:38.421 CC module/vfu_device/vfu_virtio_scsi.o 00:03:38.421 CC module/vfu_device/vfu_virtio_rpc.o 00:03:38.421 CC module/accel/error/accel_error.o 00:03:38.421 CC module/accel/error/accel_error_rpc.o 00:03:38.421 CC module/sock/posix/posix.o 00:03:38.421 CC module/keyring/file/keyring.o 00:03:38.421 CC module/keyring/file/keyring_rpc.o 00:03:38.421 CC module/scheduler/gscheduler/gscheduler.o 00:03:38.421 CC module/blob/bdev/blob_bdev.o 00:03:38.421 CC module/accel/ioat/accel_ioat.o 00:03:38.421 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:38.421 CC module/accel/dsa/accel_dsa.o 00:03:38.421 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:38.421 CC module/accel/ioat/accel_ioat_rpc.o 00:03:38.421 CC module/accel/dsa/accel_dsa_rpc.o 00:03:38.421 CC module/keyring/linux/keyring.o 00:03:38.421 CC module/accel/iaa/accel_iaa.o 00:03:38.421 CC module/keyring/linux/keyring_rpc.o 00:03:38.421 CC module/accel/iaa/accel_iaa_rpc.o 00:03:38.677 LIB libspdk_env_dpdk_rpc.a 00:03:38.677 SO libspdk_env_dpdk_rpc.so.6.0 00:03:38.677 SYMLINK libspdk_env_dpdk_rpc.so 00:03:38.677 LIB libspdk_keyring_linux.a 00:03:38.677 LIB libspdk_keyring_file.a 00:03:38.677 LIB libspdk_scheduler_gscheduler.a 00:03:38.677 LIB libspdk_scheduler_dpdk_governor.a 00:03:38.678 SO libspdk_keyring_file.so.1.0 00:03:38.678 SO libspdk_keyring_linux.so.1.0 00:03:38.678 LIB libspdk_accel_error.a 00:03:38.678 SO libspdk_scheduler_gscheduler.so.4.0 00:03:38.678 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:38.678 LIB libspdk_accel_ioat.a 00:03:38.678 LIB libspdk_scheduler_dynamic.a 00:03:38.678 SO libspdk_accel_error.so.2.0 00:03:38.678 LIB libspdk_accel_iaa.a 00:03:38.678 SO 
libspdk_accel_ioat.so.6.0 00:03:38.678 SO libspdk_scheduler_dynamic.so.4.0 00:03:38.935 SYMLINK libspdk_keyring_file.so 00:03:38.935 SYMLINK libspdk_keyring_linux.so 00:03:38.935 SYMLINK libspdk_scheduler_gscheduler.so 00:03:38.935 SO libspdk_accel_iaa.so.3.0 00:03:38.935 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:38.935 SYMLINK libspdk_accel_error.so 00:03:38.935 LIB libspdk_accel_dsa.a 00:03:38.935 LIB libspdk_blob_bdev.a 00:03:38.935 SYMLINK libspdk_accel_ioat.so 00:03:38.935 SYMLINK libspdk_scheduler_dynamic.so 00:03:38.935 SO libspdk_accel_dsa.so.5.0 00:03:38.935 SO libspdk_blob_bdev.so.11.0 00:03:38.935 SYMLINK libspdk_accel_iaa.so 00:03:38.935 SYMLINK libspdk_blob_bdev.so 00:03:38.935 SYMLINK libspdk_accel_dsa.so 00:03:39.202 LIB libspdk_vfu_device.a 00:03:39.202 SO libspdk_vfu_device.so.3.0 00:03:39.202 CC module/bdev/error/vbdev_error.o 00:03:39.202 CC module/bdev/error/vbdev_error_rpc.o 00:03:39.202 CC module/bdev/null/bdev_null.o 00:03:39.202 CC module/blobfs/bdev/blobfs_bdev.o 00:03:39.202 CC module/bdev/malloc/bdev_malloc.o 00:03:39.202 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:39.202 CC module/bdev/raid/bdev_raid.o 00:03:39.202 CC module/bdev/split/vbdev_split.o 00:03:39.202 CC module/bdev/null/bdev_null_rpc.o 00:03:39.202 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:39.202 CC module/bdev/split/vbdev_split_rpc.o 00:03:39.202 CC module/bdev/delay/vbdev_delay.o 00:03:39.202 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:39.202 CC module/bdev/raid/bdev_raid_rpc.o 00:03:39.202 CC module/bdev/nvme/bdev_nvme.o 00:03:39.202 CC module/bdev/passthru/vbdev_passthru.o 00:03:39.202 CC module/bdev/gpt/gpt.o 00:03:39.202 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:39.202 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:39.202 CC module/bdev/raid/bdev_raid_sb.o 00:03:39.202 CC module/bdev/lvol/vbdev_lvol.o 00:03:39.202 CC module/bdev/gpt/vbdev_gpt.o 00:03:39.202 CC module/bdev/nvme/nvme_rpc.o 00:03:39.202 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:39.202 CC module/bdev/raid/raid0.o 00:03:39.202 CC module/bdev/aio/bdev_aio.o 00:03:39.202 CC module/bdev/nvme/bdev_mdns_client.o 00:03:39.202 CC module/bdev/ftl/bdev_ftl.o 00:03:39.202 CC module/bdev/raid/raid1.o 00:03:39.202 CC module/bdev/nvme/vbdev_opal.o 00:03:39.202 CC module/bdev/aio/bdev_aio_rpc.o 00:03:39.202 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:39.202 CC module/bdev/raid/concat.o 00:03:39.202 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:39.202 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:39.202 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:39.202 CC module/bdev/iscsi/bdev_iscsi.o 00:03:39.202 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:39.202 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:39.202 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:39.202 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:39.202 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:39.202 SYMLINK libspdk_vfu_device.so 00:03:39.459 LIB libspdk_bdev_error.a 00:03:39.459 LIB libspdk_blobfs_bdev.a 00:03:39.460 SO libspdk_bdev_error.so.6.0 00:03:39.460 LIB libspdk_sock_posix.a 00:03:39.460 SO libspdk_blobfs_bdev.so.6.0 00:03:39.718 SO libspdk_sock_posix.so.6.0 00:03:39.718 SYMLINK libspdk_bdev_error.so 00:03:39.718 LIB libspdk_bdev_split.a 00:03:39.718 SYMLINK libspdk_blobfs_bdev.so 00:03:39.718 LIB libspdk_bdev_null.a 00:03:39.718 SO libspdk_bdev_split.so.6.0 00:03:39.718 LIB libspdk_bdev_ftl.a 00:03:39.718 SYMLINK libspdk_sock_posix.so 00:03:39.718 LIB libspdk_bdev_gpt.a 00:03:39.718 SO libspdk_bdev_null.so.6.0 00:03:39.718 SO libspdk_bdev_gpt.so.6.0 
00:03:39.718 SO libspdk_bdev_ftl.so.6.0 00:03:39.718 LIB libspdk_bdev_aio.a 00:03:39.718 SYMLINK libspdk_bdev_split.so 00:03:39.718 LIB libspdk_bdev_iscsi.a 00:03:39.718 SO libspdk_bdev_aio.so.6.0 00:03:39.718 SYMLINK libspdk_bdev_null.so 00:03:39.718 LIB libspdk_bdev_delay.a 00:03:39.718 SO libspdk_bdev_iscsi.so.6.0 00:03:39.718 LIB libspdk_bdev_passthru.a 00:03:39.718 SYMLINK libspdk_bdev_gpt.so 00:03:39.718 SYMLINK libspdk_bdev_ftl.so 00:03:39.718 LIB libspdk_bdev_malloc.a 00:03:39.718 LIB libspdk_bdev_virtio.a 00:03:39.718 SO libspdk_bdev_delay.so.6.0 00:03:39.718 SO libspdk_bdev_passthru.so.6.0 00:03:39.718 LIB libspdk_bdev_zone_block.a 00:03:39.718 SYMLINK libspdk_bdev_aio.so 00:03:39.718 SO libspdk_bdev_malloc.so.6.0 00:03:39.718 SO libspdk_bdev_virtio.so.6.0 00:03:39.718 SYMLINK libspdk_bdev_iscsi.so 00:03:39.718 SO libspdk_bdev_zone_block.so.6.0 00:03:39.718 SYMLINK libspdk_bdev_delay.so 00:03:39.975 SYMLINK libspdk_bdev_passthru.so 00:03:39.975 SYMLINK libspdk_bdev_malloc.so 00:03:39.975 SYMLINK libspdk_bdev_zone_block.so 00:03:39.975 SYMLINK libspdk_bdev_virtio.so 00:03:39.975 LIB libspdk_bdev_lvol.a 00:03:39.975 SO libspdk_bdev_lvol.so.6.0 00:03:39.975 SYMLINK libspdk_bdev_lvol.so 00:03:40.233 LIB libspdk_bdev_raid.a 00:03:40.490 SO libspdk_bdev_raid.so.6.0 00:03:40.490 SYMLINK libspdk_bdev_raid.so 00:03:41.863 LIB libspdk_bdev_nvme.a 00:03:41.863 SO libspdk_bdev_nvme.so.7.0 00:03:41.863 SYMLINK libspdk_bdev_nvme.so 00:03:42.119 CC module/event/subsystems/vmd/vmd.o 00:03:42.119 CC module/event/subsystems/sock/sock.o 00:03:42.119 CC module/event/subsystems/iobuf/iobuf.o 00:03:42.119 CC module/event/subsystems/scheduler/scheduler.o 00:03:42.119 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:42.119 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:42.119 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:42.119 CC module/event/subsystems/keyring/keyring.o 00:03:42.119 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:42.119 LIB libspdk_event_keyring.a 00:03:42.119 LIB libspdk_event_vhost_blk.a 00:03:42.119 LIB libspdk_event_vfu_tgt.a 00:03:42.119 LIB libspdk_event_vmd.a 00:03:42.119 LIB libspdk_event_scheduler.a 00:03:42.119 LIB libspdk_event_sock.a 00:03:42.376 LIB libspdk_event_iobuf.a 00:03:42.376 SO libspdk_event_keyring.so.1.0 00:03:42.376 SO libspdk_event_vhost_blk.so.3.0 00:03:42.376 SO libspdk_event_sock.so.5.0 00:03:42.376 SO libspdk_event_scheduler.so.4.0 00:03:42.376 SO libspdk_event_vfu_tgt.so.3.0 00:03:42.376 SO libspdk_event_vmd.so.6.0 00:03:42.376 SO libspdk_event_iobuf.so.3.0 00:03:42.376 SYMLINK libspdk_event_keyring.so 00:03:42.376 SYMLINK libspdk_event_vhost_blk.so 00:03:42.376 SYMLINK libspdk_event_sock.so 00:03:42.376 SYMLINK libspdk_event_scheduler.so 00:03:42.376 SYMLINK libspdk_event_vfu_tgt.so 00:03:42.376 SYMLINK libspdk_event_vmd.so 00:03:42.376 SYMLINK libspdk_event_iobuf.so 00:03:42.633 CC module/event/subsystems/accel/accel.o 00:03:42.633 LIB libspdk_event_accel.a 00:03:42.633 SO libspdk_event_accel.so.6.0 00:03:42.633 SYMLINK libspdk_event_accel.so 00:03:42.890 CC module/event/subsystems/bdev/bdev.o 00:03:43.148 LIB libspdk_event_bdev.a 00:03:43.148 SO libspdk_event_bdev.so.6.0 00:03:43.148 SYMLINK libspdk_event_bdev.so 00:03:43.405 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:43.405 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:43.405 CC module/event/subsystems/ublk/ublk.o 00:03:43.405 CC module/event/subsystems/scsi/scsi.o 00:03:43.405 CC module/event/subsystems/nbd/nbd.o 00:03:43.405 LIB libspdk_event_nbd.a 00:03:43.405 LIB 
libspdk_event_ublk.a 00:03:43.405 LIB libspdk_event_scsi.a 00:03:43.405 SO libspdk_event_nbd.so.6.0 00:03:43.405 SO libspdk_event_ublk.so.3.0 00:03:43.405 SO libspdk_event_scsi.so.6.0 00:03:43.662 SYMLINK libspdk_event_ublk.so 00:03:43.662 SYMLINK libspdk_event_nbd.so 00:03:43.662 LIB libspdk_event_nvmf.a 00:03:43.662 SYMLINK libspdk_event_scsi.so 00:03:43.662 SO libspdk_event_nvmf.so.6.0 00:03:43.662 SYMLINK libspdk_event_nvmf.so 00:03:43.662 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:43.662 CC module/event/subsystems/iscsi/iscsi.o 00:03:43.921 LIB libspdk_event_vhost_scsi.a 00:03:43.921 SO libspdk_event_vhost_scsi.so.3.0 00:03:43.921 LIB libspdk_event_iscsi.a 00:03:43.921 SO libspdk_event_iscsi.so.6.0 00:03:43.921 SYMLINK libspdk_event_vhost_scsi.so 00:03:43.921 SYMLINK libspdk_event_iscsi.so 00:03:44.179 SO libspdk.so.6.0 00:03:44.179 SYMLINK libspdk.so 00:03:44.179 CC app/trace_record/trace_record.o 00:03:44.179 CXX app/trace/trace.o 00:03:44.179 TEST_HEADER include/spdk/accel.h 00:03:44.179 TEST_HEADER include/spdk/accel_module.h 00:03:44.179 CC app/spdk_lspci/spdk_lspci.o 00:03:44.179 TEST_HEADER include/spdk/assert.h 00:03:44.179 TEST_HEADER include/spdk/barrier.h 00:03:44.179 TEST_HEADER include/spdk/base64.h 00:03:44.179 CC app/spdk_nvme_identify/identify.o 00:03:44.179 TEST_HEADER include/spdk/bdev.h 00:03:44.179 CC app/spdk_top/spdk_top.o 00:03:44.179 TEST_HEADER include/spdk/bdev_module.h 00:03:44.179 CC app/spdk_nvme_discover/discovery_aer.o 00:03:44.179 TEST_HEADER include/spdk/bdev_zone.h 00:03:44.179 TEST_HEADER include/spdk/bit_array.h 00:03:44.179 CC app/spdk_nvme_perf/perf.o 00:03:44.444 TEST_HEADER include/spdk/bit_pool.h 00:03:44.444 TEST_HEADER include/spdk/blob_bdev.h 00:03:44.444 TEST_HEADER include/spdk/blobfs.h 00:03:44.444 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:44.444 TEST_HEADER include/spdk/blob.h 00:03:44.444 TEST_HEADER include/spdk/conf.h 00:03:44.444 TEST_HEADER include/spdk/config.h 00:03:44.444 TEST_HEADER include/spdk/cpuset.h 00:03:44.444 CC test/rpc_client/rpc_client_test.o 00:03:44.444 TEST_HEADER include/spdk/crc16.h 00:03:44.444 TEST_HEADER include/spdk/crc32.h 00:03:44.444 TEST_HEADER include/spdk/crc64.h 00:03:44.444 TEST_HEADER include/spdk/dif.h 00:03:44.444 TEST_HEADER include/spdk/dma.h 00:03:44.444 TEST_HEADER include/spdk/endian.h 00:03:44.444 TEST_HEADER include/spdk/env_dpdk.h 00:03:44.444 TEST_HEADER include/spdk/env.h 00:03:44.444 TEST_HEADER include/spdk/event.h 00:03:44.444 TEST_HEADER include/spdk/fd_group.h 00:03:44.445 TEST_HEADER include/spdk/fd.h 00:03:44.445 TEST_HEADER include/spdk/file.h 00:03:44.445 TEST_HEADER include/spdk/ftl.h 00:03:44.445 TEST_HEADER include/spdk/gpt_spec.h 00:03:44.445 TEST_HEADER include/spdk/hexlify.h 00:03:44.445 TEST_HEADER include/spdk/histogram_data.h 00:03:44.445 TEST_HEADER include/spdk/idxd.h 00:03:44.445 TEST_HEADER include/spdk/idxd_spec.h 00:03:44.445 TEST_HEADER include/spdk/init.h 00:03:44.445 TEST_HEADER include/spdk/ioat_spec.h 00:03:44.445 TEST_HEADER include/spdk/ioat.h 00:03:44.445 TEST_HEADER include/spdk/iscsi_spec.h 00:03:44.445 TEST_HEADER include/spdk/json.h 00:03:44.445 TEST_HEADER include/spdk/keyring.h 00:03:44.445 TEST_HEADER include/spdk/jsonrpc.h 00:03:44.445 TEST_HEADER include/spdk/keyring_module.h 00:03:44.445 TEST_HEADER include/spdk/likely.h 00:03:44.445 TEST_HEADER include/spdk/log.h 00:03:44.445 TEST_HEADER include/spdk/lvol.h 00:03:44.445 TEST_HEADER include/spdk/memory.h 00:03:44.445 TEST_HEADER include/spdk/mmio.h 00:03:44.445 TEST_HEADER 
include/spdk/nbd.h 00:03:44.445 TEST_HEADER include/spdk/notify.h 00:03:44.445 TEST_HEADER include/spdk/nvme.h 00:03:44.445 TEST_HEADER include/spdk/nvme_intel.h 00:03:44.445 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:44.445 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:44.445 TEST_HEADER include/spdk/nvme_spec.h 00:03:44.445 TEST_HEADER include/spdk/nvme_zns.h 00:03:44.445 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:44.445 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:44.445 TEST_HEADER include/spdk/nvmf.h 00:03:44.445 TEST_HEADER include/spdk/nvmf_spec.h 00:03:44.445 TEST_HEADER include/spdk/nvmf_transport.h 00:03:44.445 TEST_HEADER include/spdk/opal_spec.h 00:03:44.445 TEST_HEADER include/spdk/opal.h 00:03:44.445 TEST_HEADER include/spdk/pci_ids.h 00:03:44.445 TEST_HEADER include/spdk/pipe.h 00:03:44.445 TEST_HEADER include/spdk/queue.h 00:03:44.445 TEST_HEADER include/spdk/reduce.h 00:03:44.445 TEST_HEADER include/spdk/rpc.h 00:03:44.445 TEST_HEADER include/spdk/scheduler.h 00:03:44.445 TEST_HEADER include/spdk/scsi.h 00:03:44.445 TEST_HEADER include/spdk/scsi_spec.h 00:03:44.445 TEST_HEADER include/spdk/sock.h 00:03:44.445 TEST_HEADER include/spdk/stdinc.h 00:03:44.445 TEST_HEADER include/spdk/string.h 00:03:44.445 TEST_HEADER include/spdk/thread.h 00:03:44.445 TEST_HEADER include/spdk/trace_parser.h 00:03:44.445 TEST_HEADER include/spdk/trace.h 00:03:44.445 TEST_HEADER include/spdk/tree.h 00:03:44.445 TEST_HEADER include/spdk/ublk.h 00:03:44.445 TEST_HEADER include/spdk/util.h 00:03:44.445 TEST_HEADER include/spdk/uuid.h 00:03:44.445 TEST_HEADER include/spdk/version.h 00:03:44.445 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:44.445 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:44.445 TEST_HEADER include/spdk/vhost.h 00:03:44.445 TEST_HEADER include/spdk/vmd.h 00:03:44.445 TEST_HEADER include/spdk/xor.h 00:03:44.445 TEST_HEADER include/spdk/zipf.h 00:03:44.445 CXX test/cpp_headers/accel.o 00:03:44.445 CXX test/cpp_headers/accel_module.o 00:03:44.445 CXX test/cpp_headers/assert.o 00:03:44.445 CXX test/cpp_headers/barrier.o 00:03:44.445 CXX test/cpp_headers/base64.o 00:03:44.445 CXX test/cpp_headers/bdev.o 00:03:44.445 CXX test/cpp_headers/bdev_module.o 00:03:44.445 CXX test/cpp_headers/bdev_zone.o 00:03:44.445 CXX test/cpp_headers/bit_array.o 00:03:44.445 CXX test/cpp_headers/bit_pool.o 00:03:44.445 CC app/spdk_dd/spdk_dd.o 00:03:44.445 CXX test/cpp_headers/blob_bdev.o 00:03:44.445 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:44.445 CXX test/cpp_headers/blobfs_bdev.o 00:03:44.445 CXX test/cpp_headers/blobfs.o 00:03:44.445 CXX test/cpp_headers/blob.o 00:03:44.445 CXX test/cpp_headers/conf.o 00:03:44.445 CXX test/cpp_headers/config.o 00:03:44.445 CXX test/cpp_headers/cpuset.o 00:03:44.445 CXX test/cpp_headers/crc16.o 00:03:44.445 CC app/nvmf_tgt/nvmf_main.o 00:03:44.445 CC app/iscsi_tgt/iscsi_tgt.o 00:03:44.445 CXX test/cpp_headers/crc32.o 00:03:44.445 CC examples/ioat/perf/perf.o 00:03:44.445 CC app/spdk_tgt/spdk_tgt.o 00:03:44.445 CC examples/util/zipf/zipf.o 00:03:44.445 CC examples/ioat/verify/verify.o 00:03:44.445 CC test/thread/poller_perf/poller_perf.o 00:03:44.445 CC test/app/histogram_perf/histogram_perf.o 00:03:44.445 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:44.445 CC test/env/vtophys/vtophys.o 00:03:44.445 CC test/app/jsoncat/jsoncat.o 00:03:44.445 CC test/app/stub/stub.o 00:03:44.445 CC test/env/memory/memory_ut.o 00:03:44.445 CC test/env/pci/pci_ut.o 00:03:44.445 CC app/fio/nvme/fio_plugin.o 00:03:44.445 CC test/dma/test_dma/test_dma.o 00:03:44.445 
CC test/app/bdev_svc/bdev_svc.o 00:03:44.445 CC app/fio/bdev/fio_plugin.o 00:03:44.711 LINK spdk_lspci 00:03:44.711 CC test/env/mem_callbacks/mem_callbacks.o 00:03:44.711 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:44.711 LINK rpc_client_test 00:03:44.711 LINK spdk_nvme_discover 00:03:44.711 LINK jsoncat 00:03:44.711 CXX test/cpp_headers/crc64.o 00:03:44.711 LINK vtophys 00:03:44.711 LINK zipf 00:03:44.711 CXX test/cpp_headers/dif.o 00:03:44.711 LINK interrupt_tgt 00:03:44.711 LINK histogram_perf 00:03:44.711 LINK poller_perf 00:03:44.711 CXX test/cpp_headers/dma.o 00:03:44.711 LINK nvmf_tgt 00:03:44.711 CXX test/cpp_headers/endian.o 00:03:44.711 LINK spdk_trace_record 00:03:44.711 CXX test/cpp_headers/env_dpdk.o 00:03:44.711 CXX test/cpp_headers/env.o 00:03:44.711 CXX test/cpp_headers/event.o 00:03:44.979 CXX test/cpp_headers/fd_group.o 00:03:44.979 CXX test/cpp_headers/fd.o 00:03:44.979 LINK env_dpdk_post_init 00:03:44.979 CXX test/cpp_headers/file.o 00:03:44.979 LINK stub 00:03:44.979 CXX test/cpp_headers/ftl.o 00:03:44.979 LINK iscsi_tgt 00:03:44.979 CXX test/cpp_headers/gpt_spec.o 00:03:44.979 CXX test/cpp_headers/hexlify.o 00:03:44.979 CXX test/cpp_headers/histogram_data.o 00:03:44.979 CXX test/cpp_headers/idxd.o 00:03:44.979 LINK spdk_tgt 00:03:44.979 CXX test/cpp_headers/idxd_spec.o 00:03:44.979 LINK ioat_perf 00:03:44.979 LINK bdev_svc 00:03:44.979 LINK verify 00:03:44.979 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:44.979 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:44.979 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:44.979 CXX test/cpp_headers/init.o 00:03:44.979 CXX test/cpp_headers/ioat.o 00:03:44.979 LINK spdk_dd 00:03:44.979 CXX test/cpp_headers/ioat_spec.o 00:03:45.288 CXX test/cpp_headers/iscsi_spec.o 00:03:45.288 LINK spdk_trace 00:03:45.288 CXX test/cpp_headers/json.o 00:03:45.288 CXX test/cpp_headers/jsonrpc.o 00:03:45.288 CXX test/cpp_headers/keyring.o 00:03:45.288 CXX test/cpp_headers/keyring_module.o 00:03:45.288 CXX test/cpp_headers/likely.o 00:03:45.288 CXX test/cpp_headers/log.o 00:03:45.288 CXX test/cpp_headers/lvol.o 00:03:45.288 CXX test/cpp_headers/memory.o 00:03:45.288 CXX test/cpp_headers/mmio.o 00:03:45.288 CXX test/cpp_headers/nbd.o 00:03:45.288 CXX test/cpp_headers/notify.o 00:03:45.288 CXX test/cpp_headers/nvme.o 00:03:45.288 CXX test/cpp_headers/nvme_intel.o 00:03:45.288 CXX test/cpp_headers/nvme_ocssd.o 00:03:45.288 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:45.288 CXX test/cpp_headers/nvme_spec.o 00:03:45.288 LINK pci_ut 00:03:45.288 CXX test/cpp_headers/nvme_zns.o 00:03:45.288 CXX test/cpp_headers/nvmf_cmd.o 00:03:45.288 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:45.288 CXX test/cpp_headers/nvmf.o 00:03:45.288 CXX test/cpp_headers/nvmf_spec.o 00:03:45.288 CXX test/cpp_headers/nvmf_transport.o 00:03:45.288 LINK test_dma 00:03:45.288 CXX test/cpp_headers/opal.o 00:03:45.288 CXX test/cpp_headers/opal_spec.o 00:03:45.550 LINK nvme_fuzz 00:03:45.550 CC test/event/event_perf/event_perf.o 00:03:45.550 CC examples/sock/hello_world/hello_sock.o 00:03:45.550 CC test/event/reactor/reactor.o 00:03:45.550 CXX test/cpp_headers/pci_ids.o 00:03:45.550 CXX test/cpp_headers/pipe.o 00:03:45.550 CXX test/cpp_headers/queue.o 00:03:45.550 CC test/event/reactor_perf/reactor_perf.o 00:03:45.550 CXX test/cpp_headers/reduce.o 00:03:45.550 CC examples/idxd/perf/perf.o 00:03:45.550 CXX test/cpp_headers/rpc.o 00:03:45.550 LINK spdk_nvme 00:03:45.550 CC examples/vmd/lsvmd/lsvmd.o 00:03:45.550 CXX test/cpp_headers/scheduler.o 00:03:45.550 CXX test/cpp_headers/scsi.o 
00:03:45.550 CC examples/vmd/led/led.o 00:03:45.550 CXX test/cpp_headers/scsi_spec.o 00:03:45.550 CC test/event/app_repeat/app_repeat.o 00:03:45.550 CXX test/cpp_headers/sock.o 00:03:45.550 CC examples/thread/thread/thread_ex.o 00:03:45.550 CXX test/cpp_headers/stdinc.o 00:03:45.550 LINK spdk_bdev 00:03:45.550 CXX test/cpp_headers/string.o 00:03:45.550 CXX test/cpp_headers/thread.o 00:03:45.808 CC test/event/scheduler/scheduler.o 00:03:45.808 CXX test/cpp_headers/trace.o 00:03:45.808 CXX test/cpp_headers/trace_parser.o 00:03:45.808 CXX test/cpp_headers/tree.o 00:03:45.808 CXX test/cpp_headers/ublk.o 00:03:45.808 CXX test/cpp_headers/util.o 00:03:45.808 CXX test/cpp_headers/uuid.o 00:03:45.808 CXX test/cpp_headers/version.o 00:03:45.808 CXX test/cpp_headers/vfio_user_pci.o 00:03:45.808 CXX test/cpp_headers/vfio_user_spec.o 00:03:45.808 CXX test/cpp_headers/vhost.o 00:03:45.808 CXX test/cpp_headers/vmd.o 00:03:45.808 CXX test/cpp_headers/xor.o 00:03:45.808 LINK vhost_fuzz 00:03:45.808 CXX test/cpp_headers/zipf.o 00:03:45.808 LINK mem_callbacks 00:03:45.808 CC app/vhost/vhost.o 00:03:45.808 LINK spdk_nvme_perf 00:03:45.808 LINK reactor 00:03:45.808 LINK event_perf 00:03:45.808 LINK reactor_perf 00:03:45.808 LINK lsvmd 00:03:45.808 LINK led 00:03:45.808 LINK app_repeat 00:03:46.065 LINK spdk_top 00:03:46.066 LINK hello_sock 00:03:46.066 LINK spdk_nvme_identify 00:03:46.066 CC test/nvme/sgl/sgl.o 00:03:46.066 CC test/nvme/e2edp/nvme_dp.o 00:03:46.066 CC test/nvme/reset/reset.o 00:03:46.066 CC test/nvme/aer/aer.o 00:03:46.066 CC test/nvme/reserve/reserve.o 00:03:46.066 CC test/nvme/overhead/overhead.o 00:03:46.066 CC test/accel/dif/dif.o 00:03:46.066 CC test/nvme/startup/startup.o 00:03:46.066 CC test/nvme/err_injection/err_injection.o 00:03:46.066 CC test/blobfs/mkfs/mkfs.o 00:03:46.066 CC test/nvme/simple_copy/simple_copy.o 00:03:46.066 LINK scheduler 00:03:46.066 CC test/nvme/boot_partition/boot_partition.o 00:03:46.066 CC test/nvme/connect_stress/connect_stress.o 00:03:46.066 CC test/nvme/compliance/nvme_compliance.o 00:03:46.066 LINK thread 00:03:46.066 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:46.066 CC test/nvme/fused_ordering/fused_ordering.o 00:03:46.066 CC test/nvme/fdp/fdp.o 00:03:46.066 CC test/lvol/esnap/esnap.o 00:03:46.324 CC test/nvme/cuse/cuse.o 00:03:46.324 LINK idxd_perf 00:03:46.324 LINK vhost 00:03:46.324 LINK err_injection 00:03:46.324 LINK connect_stress 00:03:46.324 LINK mkfs 00:03:46.324 LINK doorbell_aers 00:03:46.324 LINK startup 00:03:46.324 LINK reset 00:03:46.582 LINK boot_partition 00:03:46.582 LINK reserve 00:03:46.582 LINK aer 00:03:46.582 LINK overhead 00:03:46.582 CC examples/nvme/hotplug/hotplug.o 00:03:46.582 CC examples/nvme/reconnect/reconnect.o 00:03:46.582 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:46.582 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:46.582 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:46.583 CC examples/nvme/arbitration/arbitration.o 00:03:46.583 CC examples/nvme/hello_world/hello_world.o 00:03:46.583 CC examples/nvme/abort/abort.o 00:03:46.583 LINK memory_ut 00:03:46.583 LINK fused_ordering 00:03:46.583 LINK fdp 00:03:46.583 LINK nvme_dp 00:03:46.583 LINK simple_copy 00:03:46.583 LINK sgl 00:03:46.583 LINK nvme_compliance 00:03:46.841 CC examples/accel/perf/accel_perf.o 00:03:46.841 CC examples/blob/hello_world/hello_blob.o 00:03:46.841 CC examples/blob/cli/blobcli.o 00:03:46.841 LINK hello_world 00:03:46.841 LINK dif 00:03:46.841 LINK hotplug 00:03:46.841 LINK pmr_persistence 00:03:46.841 LINK cmb_copy 00:03:46.841 
LINK abort 00:03:47.099 LINK arbitration 00:03:47.099 LINK reconnect 00:03:47.099 LINK hello_blob 00:03:47.099 LINK nvme_manage 00:03:47.099 CC test/bdev/bdevio/bdevio.o 00:03:47.099 LINK accel_perf 00:03:47.356 LINK blobcli 00:03:47.356 LINK iscsi_fuzz 00:03:47.614 CC examples/bdev/hello_world/hello_bdev.o 00:03:47.614 CC examples/bdev/bdevperf/bdevperf.o 00:03:47.614 LINK bdevio 00:03:47.871 LINK cuse 00:03:47.871 LINK hello_bdev 00:03:48.439 LINK bdevperf 00:03:48.698 CC examples/nvmf/nvmf/nvmf.o 00:03:48.956 LINK nvmf 00:03:51.487 LINK esnap 00:03:51.487 00:03:51.487 real 0m49.054s 00:03:51.487 user 10m10.865s 00:03:51.487 sys 2m27.811s 00:03:51.487 13:42:46 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:51.487 13:42:46 make -- common/autotest_common.sh@10 -- $ set +x 00:03:51.487 ************************************ 00:03:51.487 END TEST make 00:03:51.487 ************************************ 00:03:51.487 13:42:46 -- common/autotest_common.sh@1142 -- $ return 0 00:03:51.487 13:42:46 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:51.487 13:42:46 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:51.487 13:42:46 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:51.487 13:42:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:51.487 13:42:46 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:51.487 13:42:46 -- pm/common@44 -- $ pid=3545256 00:03:51.487 13:42:46 -- pm/common@50 -- $ kill -TERM 3545256 00:03:51.487 13:42:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:51.487 13:42:46 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:51.487 13:42:46 -- pm/common@44 -- $ pid=3545258 00:03:51.487 13:42:46 -- pm/common@50 -- $ kill -TERM 3545258 00:03:51.487 13:42:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:51.487 13:42:46 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:51.487 13:42:46 -- pm/common@44 -- $ pid=3545260 00:03:51.487 13:42:46 -- pm/common@50 -- $ kill -TERM 3545260 00:03:51.487 13:42:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:51.487 13:42:46 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:51.487 13:42:46 -- pm/common@44 -- $ pid=3545288 00:03:51.487 13:42:46 -- pm/common@50 -- $ sudo -E kill -TERM 3545288 00:03:51.745 13:42:46 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:51.745 13:42:46 -- nvmf/common.sh@7 -- # uname -s 00:03:51.745 13:42:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:51.745 13:42:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:51.745 13:42:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:51.745 13:42:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:51.745 13:42:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:51.745 13:42:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:51.745 13:42:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:51.745 13:42:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:51.745 13:42:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:51.745 13:42:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:51.745 13:42:46 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:03:51.745 13:42:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:03:51.745 13:42:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:51.745 13:42:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:51.745 13:42:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:51.745 13:42:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:51.745 13:42:46 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:51.745 13:42:46 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:51.745 13:42:46 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:51.745 13:42:46 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:51.745 13:42:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:51.745 13:42:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:51.745 13:42:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:51.745 13:42:46 -- paths/export.sh@5 -- # export PATH 00:03:51.746 13:42:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:51.746 13:42:46 -- nvmf/common.sh@47 -- # : 0 00:03:51.746 13:42:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:51.746 13:42:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:51.746 13:42:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:51.746 13:42:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:51.746 13:42:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:51.746 13:42:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:51.746 13:42:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:51.746 13:42:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:51.746 13:42:46 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:51.746 13:42:46 -- spdk/autotest.sh@32 -- # uname -s 00:03:51.746 13:42:46 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:51.746 13:42:46 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:51.746 13:42:46 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:51.746 13:42:46 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:51.746 13:42:46 -- spdk/autotest.sh@40 -- # echo 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:51.746 13:42:46 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:51.746 13:42:46 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:51.746 13:42:46 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:51.746 13:42:46 -- spdk/autotest.sh@48 -- # udevadm_pid=3601366 00:03:51.746 13:42:46 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:51.746 13:42:46 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:51.746 13:42:46 -- pm/common@17 -- # local monitor 00:03:51.746 13:42:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:51.746 13:42:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:51.746 13:42:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:51.746 13:42:46 -- pm/common@21 -- # date +%s 00:03:51.746 13:42:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:51.746 13:42:46 -- pm/common@21 -- # date +%s 00:03:51.746 13:42:46 -- pm/common@25 -- # sleep 1 00:03:51.746 13:42:46 -- pm/common@21 -- # date +%s 00:03:51.746 13:42:46 -- pm/common@21 -- # date +%s 00:03:51.746 13:42:46 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721043766 00:03:51.746 13:42:46 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721043766 00:03:51.746 13:42:46 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721043766 00:03:51.746 13:42:46 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721043766 00:03:51.746 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721043766_collect-vmstat.pm.log 00:03:51.746 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721043766_collect-cpu-load.pm.log 00:03:51.746 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721043766_collect-cpu-temp.pm.log 00:03:51.746 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721043766_collect-bmc-pm.bmc.pm.log 00:03:52.683 13:42:47 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:52.683 13:42:47 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:52.683 13:42:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:52.683 13:42:47 -- common/autotest_common.sh@10 -- # set +x 00:03:52.683 13:42:47 -- spdk/autotest.sh@59 -- # create_test_list 00:03:52.683 13:42:47 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:52.683 13:42:47 -- common/autotest_common.sh@10 -- # set +x 00:03:52.683 13:42:47 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:52.683 13:42:47 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:52.683 13:42:47 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
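The entries above show autotest.sh launching the resource monitors (collect-cpu-load, collect-vmstat, collect-cpu-temp, collect-bmc-pm) with -d/-l/-p options and redirecting each one's output to a *.pm.log file under ../output/power; the matching *.pid files there are what the stop_monitor_resources step earlier in this log terminates with kill -TERM. A rough bash sketch of that start/stop pattern follows; the start_monitor/stop_monitors helpers are illustrative only, not the actual scripts/perf/pm or pm/common code:

    OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
    start_monitor() {                        # illustrative helper, not from the SPDK tree
        local name=$1; shift
        "$@" -d "$OUTPUT/power" -l -p "monitor.autotest.sh.$(date +%s)" &
        echo $! > "$OUTPUT/power/$name.pid"  # pid file that the cleanup step later kills
    }
    stop_monitors() {                        # mirrors the kill -TERM calls seen earlier
        local pidfile
        for pidfile in "$OUTPUT"/power/*.pid; do
            [[ -e $pidfile ]] && kill -TERM "$(cat "$pidfile")" 2>/dev/null
        done
    }
    # usage (hypothetical): start_monitor collect-cpu-load ./scripts/perf/pm/collect-cpu-load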
00:03:52.683 13:42:47 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:52.683 13:42:47 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:52.683 13:42:47 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:52.683 13:42:47 -- common/autotest_common.sh@1455 -- # uname 00:03:52.683 13:42:47 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:52.683 13:42:47 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:52.683 13:42:47 -- common/autotest_common.sh@1475 -- # uname 00:03:52.683 13:42:47 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:52.683 13:42:47 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:52.683 13:42:47 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:52.683 13:42:47 -- spdk/autotest.sh@72 -- # hash lcov 00:03:52.683 13:42:47 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:52.683 13:42:47 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:52.683 --rc lcov_branch_coverage=1 00:03:52.683 --rc lcov_function_coverage=1 00:03:52.683 --rc genhtml_branch_coverage=1 00:03:52.683 --rc genhtml_function_coverage=1 00:03:52.683 --rc genhtml_legend=1 00:03:52.683 --rc geninfo_all_blocks=1 00:03:52.683 ' 00:03:52.683 13:42:47 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:52.683 --rc lcov_branch_coverage=1 00:03:52.683 --rc lcov_function_coverage=1 00:03:52.683 --rc genhtml_branch_coverage=1 00:03:52.683 --rc genhtml_function_coverage=1 00:03:52.683 --rc genhtml_legend=1 00:03:52.683 --rc geninfo_all_blocks=1 00:03:52.683 ' 00:03:52.683 13:42:47 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:52.683 --rc lcov_branch_coverage=1 00:03:52.683 --rc lcov_function_coverage=1 00:03:52.683 --rc genhtml_branch_coverage=1 00:03:52.683 --rc genhtml_function_coverage=1 00:03:52.683 --rc genhtml_legend=1 00:03:52.683 --rc geninfo_all_blocks=1 00:03:52.683 --no-external' 00:03:52.683 13:42:47 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:52.683 --rc lcov_branch_coverage=1 00:03:52.683 --rc lcov_function_coverage=1 00:03:52.683 --rc genhtml_branch_coverage=1 00:03:52.683 --rc genhtml_function_coverage=1 00:03:52.683 --rc genhtml_legend=1 00:03:52.683 --rc geninfo_all_blocks=1 00:03:52.683 --no-external' 00:03:52.683 13:42:47 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:52.940 lcov: LCOV version 1.14 00:03:52.940 13:42:47 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:07.818 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:07.818 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:22.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:22.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:04:22.671 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:22.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:04:22.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:22.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:04:22.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:22.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:04:22.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:22.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:04:22.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:22.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:04:22.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:22.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:04:22.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:22.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:04:22.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:22.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:04:22.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:22.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:04:22.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:22.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:04:22.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:22.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:04:22.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:22.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:04:22.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:22.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:04:22.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:04:22.671 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:04:22.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:22.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:22.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:22.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:04:22.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:22.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:04:22.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:22.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:04:22.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:22.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:04:22.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:22.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:04:22.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:22.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:04:22.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:22.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:04:22.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:04:22.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:04:22.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:22.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:04:22.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:04:22.671 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:04:22.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:22.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:04:22.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:22.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:04:22.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:04:22.672 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:04:22.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:22.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:04:22.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:22.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:04:22.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:22.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:04:22.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:22.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:04:22.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:22.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:04:22.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:22.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:04:22.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:04:22.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:04:22.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:22.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:04:22.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:22.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:04:22.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:22.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:22.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:04:22.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:04:22.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:22.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:04:22.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:22.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:04:22.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:22.672 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:04:22.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:04:22.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:04:22.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:22.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:04:22.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:22.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:04:22.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:22.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:04:22.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:22.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:04:22.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:22.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:04:22.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:22.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:04:22.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:22.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:04:22.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:22.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:04:22.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:22.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:22.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:22.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:22.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:22.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:04:22.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:22.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:04:22.672 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:22.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:22.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:22.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:22.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:22.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:22.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:22.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:04:22.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:22.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:22.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:22.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:04:22.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:22.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:04:22.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:22.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:04:22.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:22.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:04:22.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:22.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:04:22.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:22.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:04:22.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:22.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:04:22.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:22.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:04:22.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:22.672 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:04:22.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:22.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:04:22.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:22.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:04:22.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:22.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:04:22.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:04:22.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:04:22.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:22.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:04:22.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:22.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:04:22.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:22.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:04:22.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:22.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:04:22.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:22.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:04:22.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:04:22.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:04:22.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:22.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:04:22.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:04:22.672 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:04:22.673 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:22.673 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:22.673 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:22.673 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:22.673 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:22.673 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:04:22.673 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:22.673 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:04:22.673 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:22.673 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:04:22.673 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:22.673 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:04:25.951 13:43:20 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:25.952 13:43:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:25.952 13:43:20 -- common/autotest_common.sh@10 -- # set +x 00:04:25.952 13:43:20 -- spdk/autotest.sh@91 -- # rm -f 00:04:25.952 13:43:20 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:26.886 0000:82:00.0 (8086 0a54): Already using the nvme driver 00:04:26.886 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:04:26.886 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:04:26.886 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:04:26.886 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:04:26.886 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:04:26.886 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:04:27.143 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:04:27.143 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:04:27.143 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:04:27.143 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:04:27.143 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:04:27.143 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:04:27.143 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:04:27.143 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:04:27.143 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:04:27.143 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:04:27.143 13:43:21 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:27.143 13:43:21 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:27.143 13:43:21 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:27.143 13:43:21 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:27.143 13:43:21 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:27.143 13:43:21 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:27.143 13:43:21 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:27.143 13:43:21 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:27.143 13:43:21 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:27.143 13:43:21 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:27.143 
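Before the per-device loop that follows, autotest.sh filters out zoned block devices: get_zoned_devs walks /sys/block/nvme*, reads each device's queue/zoned attribute, and keeps only devices whose value is not "none" (on this node none are zoned, so the (( 0 > 0 )) check fails and nothing is excluded). A simplified reconstruction of that check, not the upstream autotest_common.sh implementation:

    declare -A zoned_devs=()                     # device name -> zoned flag
    for nvme in /sys/block/nvme*; do
        [[ -e $nvme/queue/zoned ]] || continue   # attribute present on block devices
        if [[ $(cat "$nvme/queue/zoned") != none ]]; then
            zoned_devs["${nvme##*/}"]=1          # zoned namespaces are skipped by the generic tests
        fi
    done
    (( ${#zoned_devs[@]} > 0 )) && echo "zoned devices: ${!zoned_devs[*]}"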
13:43:21 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:27.143 13:43:21 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:27.143 13:43:21 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:27.143 13:43:21 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:27.143 13:43:21 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:27.143 No valid GPT data, bailing 00:04:27.401 13:43:21 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:27.401 13:43:21 -- scripts/common.sh@391 -- # pt= 00:04:27.401 13:43:21 -- scripts/common.sh@392 -- # return 1 00:04:27.401 13:43:21 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:27.401 1+0 records in 00:04:27.401 1+0 records out 00:04:27.401 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00247936 s, 423 MB/s 00:04:27.401 13:43:21 -- spdk/autotest.sh@118 -- # sync 00:04:27.401 13:43:21 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:27.401 13:43:22 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:27.401 13:43:22 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:29.299 13:43:23 -- spdk/autotest.sh@124 -- # uname -s 00:04:29.299 13:43:23 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:29.299 13:43:23 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:29.299 13:43:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:29.299 13:43:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.299 13:43:23 -- common/autotest_common.sh@10 -- # set +x 00:04:29.299 ************************************ 00:04:29.299 START TEST setup.sh 00:04:29.299 ************************************ 00:04:29.299 13:43:23 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:29.299 * Looking for test storage... 00:04:29.299 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:29.299 13:43:23 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:29.299 13:43:23 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:29.299 13:43:23 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:29.299 13:43:23 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:29.299 13:43:23 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.299 13:43:23 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:29.299 ************************************ 00:04:29.299 START TEST acl 00:04:29.299 ************************************ 00:04:29.299 13:43:23 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:29.299 * Looking for test storage... 
00:04:29.299 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:29.299 13:43:23 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:29.299 13:43:23 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:29.299 13:43:23 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:29.299 13:43:23 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:29.299 13:43:23 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:29.299 13:43:23 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:29.299 13:43:23 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:29.299 13:43:23 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:29.299 13:43:23 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:29.299 13:43:23 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:29.299 13:43:23 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:29.299 13:43:23 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:29.299 13:43:23 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:29.299 13:43:23 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:29.299 13:43:24 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:29.299 13:43:24 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:30.672 13:43:25 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:30.672 13:43:25 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:30.672 13:43:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:30.672 13:43:25 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:30.672 13:43:25 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:30.672 13:43:25 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:32.049 Hugepages 00:04:32.049 node hugesize free / total 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:32.049 00:04:32.049 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:32.049 13:43:26 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:82:00.0 == *:*:*.* ]] 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\2\:\0\0\.\0* ]] 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:32.049 13:43:26 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:32.049 13:43:26 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:32.049 13:43:26 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.049 13:43:26 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:32.049 ************************************ 00:04:32.049 START TEST denied 00:04:32.049 ************************************ 00:04:32.049 13:43:26 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:04:32.049 13:43:26 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:82:00.0' 00:04:32.049 13:43:26 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:82:00.0' 00:04:32.049 13:43:26 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:32.049 13:43:26 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:32.049 13:43:26 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:33.976 0000:82:00.0 (8086 0a54): Skipping denied controller at 0000:82:00.0 00:04:33.976 13:43:28 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:82:00.0 00:04:33.976 13:43:28 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:33.976 13:43:28 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:33.976 13:43:28 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:82:00.0 ]] 00:04:33.976 13:43:28 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:82:00.0/driver 00:04:33.976 13:43:28 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:33.976 13:43:28 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:33.976 13:43:28 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:33.976 13:43:28 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:33.976 13:43:28 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:36.507 00:04:36.507 real 0m4.191s 00:04:36.507 user 0m1.177s 00:04:36.507 sys 0m2.055s 00:04:36.507 13:43:31 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:36.507 13:43:31 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:36.507 ************************************ 00:04:36.507 END TEST denied 00:04:36.507 ************************************ 00:04:36.507 13:43:31 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:36.507 13:43:31 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:36.507 13:43:31 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:36.507 13:43:31 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.507 13:43:31 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:36.507 ************************************ 00:04:36.507 START TEST allowed 00:04:36.507 ************************************ 00:04:36.507 13:43:31 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:04:36.507 13:43:31 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:82:00.0 00:04:36.507 13:43:31 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:36.507 13:43:31 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:82:00.0 .*: nvme -> .*' 00:04:36.507 13:43:31 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:36.507 13:43:31 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:39.039 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:04:39.039 13:43:33 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:39.039 13:43:33 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:39.039 13:43:33 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:39.039 13:43:33 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:39.039 13:43:33 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:40.412 00:04:40.412 real 0m4.021s 00:04:40.412 user 0m1.065s 00:04:40.412 sys 0m1.824s 00:04:40.412 13:43:35 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:40.412 13:43:35 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:40.412 ************************************ 00:04:40.412 END TEST allowed 00:04:40.412 ************************************ 00:04:40.412 13:43:35 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:40.412 00:04:40.412 real 0m11.153s 00:04:40.412 user 0m3.383s 00:04:40.412 sys 0m5.753s 00:04:40.412 13:43:35 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:40.412 13:43:35 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:40.412 ************************************ 00:04:40.412 END TEST acl 00:04:40.412 ************************************ 00:04:40.412 13:43:35 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:40.412 13:43:35 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:40.412 13:43:35 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:40.412 13:43:35 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.412 13:43:35 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:40.412 ************************************ 00:04:40.412 START TEST hugepages 00:04:40.412 ************************************ 00:04:40.412 13:43:35 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:40.412 * Looking for test storage... 00:04:40.412 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:40.412 13:43:35 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:40.412 13:43:35 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:40.412 13:43:35 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:40.412 13:43:35 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:40.412 13:43:35 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:40.412 13:43:35 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:40.412 13:43:35 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:40.412 13:43:35 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:40.412 13:43:35 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:40.412 13:43:35 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:40.412 13:43:35 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.412 13:43:35 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:40.412 13:43:35 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:40.412 13:43:35 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:40.412 13:43:35 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.412 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.412 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.412 13:43:35 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 27381564 kB' 'MemAvailable: 30952288 kB' 'Buffers: 2704 kB' 'Cached: 10020244 kB' 'SwapCached: 0 kB' 'Active: 7036496 kB' 'Inactive: 3505248 kB' 'Active(anon): 6646956 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522036 kB' 'Mapped: 210536 kB' 'Shmem: 6128160 kB' 'KReclaimable: 175520 kB' 'Slab: 516920 kB' 'SReclaimable: 175520 kB' 'SUnreclaim: 341400 kB' 'KernelStack: 12400 kB' 'PageTables: 8320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28304788 kB' 'Committed_AS: 7776164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195520 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1615452 kB' 'DirectMap2M: 16130048 kB' 'DirectMap1G: 34603008 kB' 00:04:40.412 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.412 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.412 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.412 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.412 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.412 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.412 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.412 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.412 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.412 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.412 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.412 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.412 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.412 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.412 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.412 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.412 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.412 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.412 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.412 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.413 13:43:35 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.413 13:43:35 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.413 13:43:35 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.413 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.414 13:43:35 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:40.414 
13:43:35 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:40.414 13:43:35 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:40.414 13:43:35 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:40.414 13:43:35 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.414 13:43:35 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:40.671 ************************************ 00:04:40.671 START TEST default_setup 00:04:40.671 ************************************ 00:04:40.671 13:43:35 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:04:40.671 13:43:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:40.671 13:43:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:40.671 13:43:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:40.671 13:43:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:40.671 13:43:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:40.671 13:43:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:40.671 13:43:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:40.671 13:43:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:40.671 13:43:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:40.671 13:43:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:40.671 13:43:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:40.671 13:43:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:40.671 13:43:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:40.671 13:43:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:40.671 13:43:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:40.671 13:43:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:40.671 13:43:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:40.671 13:43:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:40.671 13:43:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:40.671 13:43:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:40.671 13:43:35 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:40.671 13:43:35 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:42.044 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:42.044 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:42.044 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:42.044 
0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:42.044 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:42.044 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:42.044 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:42.044 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:42.044 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:42.044 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:42.044 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:42.044 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:42.044 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:42.044 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:42.044 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:42.044 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:42.991 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:04:42.991 13:43:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:42.991 13:43:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:42.991 13:43:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:42.991 13:43:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:42.991 13:43:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:42.991 13:43:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:42.991 13:43:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:42.991 13:43:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:42.991 13:43:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:42.991 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:42.991 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:42.991 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:42.991 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:42.991 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.991 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.991 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.991 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.991 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.991 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.991 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.991 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29462804 kB' 'MemAvailable: 33033536 kB' 'Buffers: 2704 kB' 'Cached: 10020324 kB' 'SwapCached: 0 kB' 'Active: 7055704 kB' 'Inactive: 3505248 kB' 'Active(anon): 6666164 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541216 kB' 'Mapped: 210760 kB' 'Shmem: 6128240 kB' 'KReclaimable: 175536 kB' 'Slab: 516868 kB' 'SReclaimable: 175536 kB' 'SUnreclaim: 341332 kB' 
'KernelStack: 12592 kB' 'PageTables: 8532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7796644 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195680 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1615452 kB' 'DirectMap2M: 16130048 kB' 'DirectMap1G: 34603008 kB' 00:04:42.991 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.991 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.991 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.991 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.991 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.991 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.992 
13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.992 13:43:37 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.992 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
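The wall of "[[ Key == ... ]] / continue" pairs in this trace is setup/common.sh's get_meminfo scanning /proc/meminfo one field at a time until it reaches the key it was asked for (Hugepagesize earlier, then AnonHugePages, HugePages_Surp and HugePages_Rsvd inside verify_nr_hugepages). A minimal sketch of that pattern follows; the function name, IFS setting and read variables mirror the trace, but the body is a simplified reconstruction, not the repository file.

get_meminfo() {
    local get=$1
    local var val _
    # the real helper can also read /sys/devices/system/node/node<N>/meminfo and strips
    # the leading "Node <N> " prefix first; this sketch covers only the system-wide case
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # every non-matching key is one "continue" line in the log
        echo "$val"
        return 0
    done </proc/meminfo
    return 1
}

# usage matching the trace: get_meminfo Hugepagesize  -> 2048
#                           get_meminfo AnonHugePages -> 0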
00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:42.993 13:43:37 
setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29464368 kB' 'MemAvailable: 33035100 kB' 'Buffers: 2704 kB' 'Cached: 10020324 kB' 'SwapCached: 0 kB' 'Active: 7055032 kB' 'Inactive: 3505248 kB' 'Active(anon): 6665492 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540552 kB' 'Mapped: 210676 kB' 'Shmem: 6128240 kB' 'KReclaimable: 175536 kB' 'Slab: 516868 kB' 'SReclaimable: 175536 kB' 'SUnreclaim: 341332 kB' 'KernelStack: 12384 kB' 'PageTables: 7948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7796664 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195552 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1615452 kB' 'DirectMap2M: 16130048 kB' 'DirectMap1G: 34603008 kB' 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.993 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # continue 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
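For context on the figures being verified here: the "run_test default_setup" block earlier in this section called get_test_nr_hugepages 2097152 0, i.e. it asked for 2097152 kB (2 GiB) of hugepage memory on node 0. With the 2048 kB Hugepagesize read at the top of the log, that works out to 2097152 / 2048 = 1024 pages, which matches the HugePages_Total/HugePages_Free values in the meminfo snapshots above. A hedged sketch of that arithmetic, reusing the names from the trace rather than quoting hugepages.sh:

default_hugepages=2048    # kB, the Hugepagesize value parsed earlier in the log
declare -a nodes_test=()

get_test_nr_hugepages() {
    local size=$1; shift
    local node_ids=("$@")                          # ('0') in this run
    (( size >= default_hugepages )) || return 1
    nr_hugepages=$(( size / default_hugepages ))   # 2097152 / 2048 = 1024
    local node
    for node in "${node_ids[@]}"; do
        nodes_test[node]=$nr_hugepages             # nodes_test[0]=1024, as in the trace
    done
}

get_test_nr_hugepages 2097152 0
echo "nr_hugepages=$nr_hugepages on nodes: ${!nodes_test[*]}"   # nr_hugepages=1024 on nodes: 0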
00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.994 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
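The clear_hp step near the top of this section (the repeated "echo 0" lines just before CLEAR_HUGE=yes) walks every NUMA node's hugepage pools and zeroes them so the test starts from a clean slate; scripts/setup.sh then reserves the requested pages when it rebinds the ioatdma/nvme devices to vfio-pci. A minimal sketch of that reset, assuming the standard sysfs layout and root privileges; it mirrors the loops visible in the trace rather than quoting hugepages.sh verbatim:

clear_hp() {
    local node hp
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            # zero the pool for this page size on this node
            # (two pool sizes per node in the trace, typically 2048kB and 1048576kB)
            echo 0 > "$hp/nr_hugepages"
        done
    done
    export CLEAR_HUGE=yes
}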
00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.995 13:43:37 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.995 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29464268 kB' 'MemAvailable: 33035000 kB' 'Buffers: 2704 kB' 'Cached: 10020344 kB' 'SwapCached: 0 kB' 'Active: 7054648 kB' 'Inactive: 3505248 kB' 'Active(anon): 6665108 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540140 kB' 'Mapped: 210600 kB' 'Shmem: 6128260 kB' 'KReclaimable: 175536 kB' 'Slab: 516868 kB' 'SReclaimable: 175536 kB' 'SUnreclaim: 341332 kB' 'KernelStack: 12320 kB' 'PageTables: 8252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7796684 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195536 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1615452 kB' 'DirectMap2M: 16130048 kB' 'DirectMap1G: 34603008 kB' 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # IFS=': ' 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:04:42.996 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.997 
13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.997 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:42.998 nr_hugepages=1024 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:42.998 resv_hugepages=0 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:42.998 surplus_hugepages=0 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:42.998 anon_hugepages=0 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29464268 
kB' 'MemAvailable: 33035000 kB' 'Buffers: 2704 kB' 'Cached: 10020368 kB' 'SwapCached: 0 kB' 'Active: 7054672 kB' 'Inactive: 3505248 kB' 'Active(anon): 6665132 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540140 kB' 'Mapped: 210600 kB' 'Shmem: 6128284 kB' 'KReclaimable: 175536 kB' 'Slab: 516868 kB' 'SReclaimable: 175536 kB' 'SUnreclaim: 341332 kB' 'KernelStack: 12320 kB' 'PageTables: 8252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7796708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195536 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1615452 kB' 'DirectMap2M: 16130048 kB' 'DirectMap1G: 34603008 kB' 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.998 13:43:37 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.998 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
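The hugepage figures in the snapshot printed above are internally consistent: HugePages_Total is 1024, Hugepagesize is 2048 kB, and Hugetlb is their product, 2097152 kB (2 GiB). A one-line check of that arithmetic, with the values copied from the dump:

# Values taken from the /proc/meminfo dump above.
hugepages_total=1024
hugepagesize_kb=2048
echo $(( hugepages_total * hugepagesize_kb ))   # 2097152 kB, matching the Hugetlb line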
00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.999 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.000 13:43:37 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
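With HugePages_Total matched, the lines that follow show get_meminfo returning 1024 and setup/hugepages.sh re-running its global consistency check: the kernel's HugePages_Total must equal the requested nr_hugepages plus the surplus and reserved counts read earlier (1024 == 1024 + 0 + 0 on this host). A self-contained sketch of that bookkeeping, using awk in place of the traced read loop:

# Global hugepage accounting as the trace checks it; nr_hugepages=1024 is the
# value default_setup requested.
nr_hugepages=1024
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
(( total == nr_hugepages + surp + resv )) && echo 'hugepage accounting is consistent'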
00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 20480940 kB' 'MemUsed: 4091416 kB' 'SwapCached: 0 kB' 'Active: 1342260 kB' 'Inactive: 73848 kB' 'Active(anon): 1212992 kB' 'Inactive(anon): 0 kB' 'Active(file): 129268 kB' 'Inactive(file): 73848 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1080016 kB' 'Mapped: 78952 kB' 'AnonPages: 339188 kB' 'Shmem: 876900 kB' 'KernelStack: 7208 kB' 'PageTables: 4316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 43792 kB' 'Slab: 195104 kB' 'SReclaimable: 43792 kB' 'SUnreclaim: 151312 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.000 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.001 13:43:37 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:43.001 13:43:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:43.002 13:43:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:43.002 13:43:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:43.002 13:43:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:43.002 13:43:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:43.002 13:43:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:43.002 node0=1024 expecting 1024 00:04:43.002 13:43:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:43.002 00:04:43.002 real 0m2.482s 00:04:43.002 user 0m0.701s 00:04:43.002 sys 0m0.908s 00:04:43.002 13:43:37 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.002 13:43:37 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:43.002 ************************************ 00:04:43.002 END TEST default_setup 00:04:43.002 ************************************ 00:04:43.002 13:43:37 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:43.002 13:43:37 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:43.002 13:43:37 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:43.002 13:43:37 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.002 13:43:37 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:43.002 ************************************ 00:04:43.002 START TEST per_node_1G_alloc 00:04:43.002 ************************************ 00:04:43.002 13:43:37 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:04:43.002 13:43:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:43.002 13:43:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:43.002 13:43:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:43.002 13:43:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:43.002 13:43:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:43.002 13:43:37 
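The pass that just finished is the generic field scan from setup/common.sh: the file is read with IFS=': ', every key that is not the requested one takes the '# continue' branch, and the matching key's value is echoed back (0 for HugePages_Surp here, 1024 for HugePages_Total). A minimal sketch of that pattern, with illustrative names rather than the exact common.sh source:

  # Print the value of one /proc/meminfo field, skipping every other key --
  # the same IFS=': ' / read -r loop the trace shows above.
  get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
      [[ $var == "$get" ]] && { echo "$val"; return 0; }   # value column only
    done < /proc/meminfo
    return 1
  }
  # get_meminfo_sketch HugePages_Total   -> 1024 in the run above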
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:43.002 13:43:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:43.002 13:43:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:43.002 13:43:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:43.002 13:43:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:43.002 13:43:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:43.002 13:43:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:43.002 13:43:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:43.002 13:43:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:43.002 13:43:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:43.002 13:43:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:43.002 13:43:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:43.002 13:43:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:43.002 13:43:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:43.002 13:43:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:43.002 13:43:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:43.002 13:43:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:43.002 13:43:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:43.002 13:43:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:43.002 13:43:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:43.002 13:43:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:43.002 13:43:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:44.378 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:44.378 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:44.378 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:44.378 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:44.378 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:44.378 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:44.378 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:44.378 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:44.379 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:44.379 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:44.379 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:44.379 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:44.379 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:44.379 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:44.379 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:44.379 
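At this point per_node_1G_alloc has turned the 1 GiB request into 512 default-size (2048 kB) pages for each of nodes 0 and 1 and exported that as NRHUGE=512 HUGENODE=0,1 before calling scripts/setup.sh. As an illustration only (this uses the kernel's per-node sysfs knobs, not the setup.sh implementation), such a reservation could be made directly, run as root:

  # Reserve NRHUGE default-size hugepages on each node listed in HUGENODE.
  NRHUGE=${NRHUGE:-512}
  HUGENODE=${HUGENODE:-0,1}
  hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this system
  IFS=',' read -ra nodes <<< "$HUGENODE"
  for n in "${nodes[@]}"; do
    echo "$NRHUGE" > "/sys/devices/system/node/node${n}/hugepages/hugepages-${hp_kb}kB/nr_hugepages"
  done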
0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:44.379 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29449476 kB' 'MemAvailable: 33020208 kB' 'Buffers: 2704 kB' 'Cached: 10020448 kB' 'SwapCached: 0 kB' 'Active: 7061548 kB' 'Inactive: 3505248 kB' 'Active(anon): 6672008 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 546872 kB' 'Mapped: 211528 kB' 'Shmem: 6128364 kB' 'KReclaimable: 175536 kB' 'Slab: 516892 kB' 'SReclaimable: 175536 kB' 'SUnreclaim: 341356 kB' 'KernelStack: 12272 kB' 'PageTables: 8092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7803016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195636 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 
1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1615452 kB' 'DirectMap2M: 16130048 kB' 'DirectMap1G: 34603008 kB' 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.379 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.380 13:43:39 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # 
get_meminfo HugePages_Surp 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29449364 kB' 'MemAvailable: 33020096 kB' 'Buffers: 2704 kB' 'Cached: 10020448 kB' 'SwapCached: 0 kB' 'Active: 7055652 kB' 'Inactive: 3505248 kB' 'Active(anon): 6666112 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540936 kB' 'Mapped: 211072 kB' 'Shmem: 6128364 kB' 'KReclaimable: 175536 kB' 'Slab: 516868 kB' 'SReclaimable: 175536 kB' 'SUnreclaim: 341332 kB' 'KernelStack: 12256 kB' 'PageTables: 8040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7796916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195632 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1615452 kB' 'DirectMap2M: 16130048 kB' 'DirectMap1G: 34603008 kB' 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
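This scan and the one after it pull HugePages_Surp and HugePages_Rsvd out of the same snapshot; both are 0 in the meminfo dump printed above. For orientation only, the same two values can be read in one shot (the test itself keeps its per-line loop):

  awk '/^HugePages_(Rsvd|Surp):/ {print $1, $2}' /proc/meminfo
  # HugePages_Rsvd: 0
  # HugePages_Surp: 0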
00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.380 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.381 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.381 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.381 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.381 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.381 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.381 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.381 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.381 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.381 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.381 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.381 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.381 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.381 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.381 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.381 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.381 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.381 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.381 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.381 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.381 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.381 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.381 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.381 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.381 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.381 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.381 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.381 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.381 13:43:39 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.381 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.381 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.381 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.381 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.381 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.381 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.381 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.381 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.381 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.381 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.381 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.644 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@99 -- # surp=0 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29449700 kB' 'MemAvailable: 33020432 kB' 'Buffers: 2704 kB' 'Cached: 10020468 kB' 'SwapCached: 0 kB' 'Active: 7054900 kB' 'Inactive: 3505248 kB' 'Active(anon): 6665360 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540212 kB' 'Mapped: 210640 kB' 'Shmem: 6128384 kB' 'KReclaimable: 175536 kB' 'Slab: 516980 kB' 'SReclaimable: 175536 kB' 'SUnreclaim: 341444 kB' 'KernelStack: 12320 kB' 'PageTables: 8120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7796936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195648 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1615452 kB' 'DirectMap2M: 16130048 kB' 'DirectMap1G: 34603008 kB' 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.645 
13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.645 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.646 13:43:39 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.646 13:43:39 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:44.646 nr_hugepages=1024 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:44.646 
resv_hugepages=0 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:44.646 surplus_hugepages=0 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:44.646 anon_hugepages=0 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:44.646 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29450352 kB' 'MemAvailable: 33021084 kB' 'Buffers: 2704 kB' 'Cached: 10020492 kB' 'SwapCached: 0 kB' 'Active: 7054960 kB' 'Inactive: 3505248 kB' 'Active(anon): 6665420 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540288 kB' 'Mapped: 210640 kB' 'Shmem: 6128408 kB' 'KReclaimable: 175536 kB' 'Slab: 516980 kB' 'SReclaimable: 175536 kB' 'SUnreclaim: 341444 kB' 'KernelStack: 12352 kB' 'PageTables: 8216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7796960 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195648 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1615452 kB' 'DirectMap2M: 16130048 kB' 'DirectMap1G: 34603008 kB' 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.647 
13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.647 13:43:39 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.647 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.648 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.648 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.648 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.648 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:44.648 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:44.648 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:44.648 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:44.648 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:44.648 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:44.648 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:44.648 13:43:39 
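By this point the trace has moved from common.sh back into setup/hugepages.sh: it has read HugePages_Surp and HugePages_Rsvd (both 0), echoed nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, confirmed that HugePages_Total matches nr_hugepages + surp + resv, and is now walking the NUMA nodes expecting 512 pages on each of the two nodes. A condensed sketch of that accounting, inferred from the trace rather than lifted from hugepages.sh (the function name here is hypothetical), is:

check_hugepage_accounting() {
	local nr_hugepages=$1   # 1024 in this run
	local surp resv node

	surp=$(get_meminfo HugePages_Surp)
	resv=$(get_meminfo HugePages_Rsvd)
	echo "nr_hugepages=$nr_hugepages"
	echo "resv_hugepages=$resv"
	echo "surplus_hugepages=$surp"

	# The system-wide pool must add up to what the test configured.
	(( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || return 1

	# Two NUMA nodes, even split: each node-local meminfo should report half
	# of the pool (512 x 2048 kB pages on this system).
	for node in /sys/devices/system/node/node[0-9]*; do
		node=${node##*node}
		(( $(get_meminfo HugePages_Total "$node") == nr_hugepages / 2 )) || return 1
	done
}

The per-node HugePages_Surp scan that follows in the trace is the same get_meminfo loop, just pointed at /sys/devices/system/node/node0/meminfo.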
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:44.648 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:44.648 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:44.648 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:44.648 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:44.648 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:44.648 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:44.648 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:44.648 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:44.648 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:44.648 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:44.648 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.648 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:44.648 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:44.648 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.648 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.648 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.648 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.648 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 21532116 kB' 'MemUsed: 3040240 kB' 'SwapCached: 0 kB' 'Active: 1341788 kB' 'Inactive: 73848 kB' 'Active(anon): 1212520 kB' 'Inactive(anon): 0 kB' 'Active(file): 129268 kB' 'Inactive(file): 73848 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1080024 kB' 'Mapped: 78952 kB' 'AnonPages: 338728 kB' 'Shmem: 876908 kB' 'KernelStack: 7240 kB' 'PageTables: 4272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 43792 kB' 'Slab: 195168 kB' 'SReclaimable: 43792 kB' 'SUnreclaim: 151376 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:44.648 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.648 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.648 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.648 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.648 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.648 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue
[xtrace condensed: setup/common.sh get_meminfo(HugePages_Surp, node=0) walks /sys/devices/system/node/node0/meminfo field by field (MemUsed, SwapCached, Active, Inactive, the anon/file breakdowns, Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, the NFS/Bounce/WritebackTmp counters, the slab counters, the hugepage mapping counters, HugePages_Total), taking the "continue" branch on every field that is not HugePages_Surp; the tail of that walk follows]
00:04:44.648 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.648 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.648 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.648 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.648 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.649 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.649 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.649 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:44.649 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:44.649 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:44.649 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:44.649 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:44.649 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:44.649 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:44.649 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:44.649 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:44.649 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:44.649 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.649 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:44.649 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:44.649 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.649 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.649 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.649 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.649 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19454316 kB' 'MemFree: 7918236 kB' 'MemUsed: 11536080 kB' 'SwapCached: 0 kB' 'Active: 5713200 kB' 'Inactive: 3431400 kB' 'Active(anon): 5452928 kB' 'Inactive(anon): 0 kB' 'Active(file): 260272 kB' 'Inactive(file): 3431400 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8943212 kB' 'Mapped: 131688 kB' 'AnonPages: 201524 kB' 'Shmem: 5251540 kB' 'KernelStack: 5096 kB' 'PageTables: 3896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 131744 kB' 'Slab: 321816 kB' 'SReclaimable: 131744 kB' 'SUnreclaim: 190072 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
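The printf above is the raw node1 meminfo dump that get_meminfo then re-reads field by field. The logic being exercised in these walks boils down to: pick /sys/devices/system/node/node<N>/meminfo when a node index is given (otherwise /proc/meminfo), strip the "Node <N> " prefix from each line, split on ': ', and print the value of the one requested field. A minimal bash sketch of that idea, for orientation only: get_meminfo_sketch is an illustrative name, and the sed stand-in replaces the mapfile/extglob prefix-stripping the traced helper actually uses.

  # Illustrative stand-in for the traced helper, not SPDK's own code.
  get_meminfo_sketch() {
      local get=$1 node=$2 var val _
      local mem_f=/proc/meminfo
      # Prefer the per-node view when a node index is given and the file exists.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      # Per-node files prefix each line with "Node <N> "; drop that, then split
      # "<field>: <value> [kB]" and print the value of the requested field.
      while IFS=': ' read -r var val _; do
          if [[ $var == "$get" ]]; then
              echo "$val"
              return 0
          fi
      done < <(sed 's/^Node [0-9]* //' "$mem_f")
      return 1
  }
  # e.g. get_meminfo_sketch HugePages_Surp 1

In this run the helper is invoked once per node, and both lookups print 0 because no surplus hugepages exist on either node.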
[xtrace condensed: get_meminfo(HugePages_Surp, node=1) walks every field of /sys/devices/system/node/node1/meminfo the same way, taking the "continue" branch on each until HugePages_Surp matches, then echoes 0 and returns; the caller adds the 0 to nodes_test[1] and folds both node totals into the sorted_t/sorted_s arrays]
00:04:44.650 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:44.650 node0=512 expecting 512
00:04:44.650 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:04:44.650 node1=512 expecting 512
00:04:44.650 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:44.650 real 0m1.530s
00:04:44.650 user 0m0.608s
00:04:44.650 sys 0m0.899s
00:04:44.650 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:44.650 13:43:39 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:44.650 ************************************
00:04:44.650 END TEST per_node_1G_alloc
00:04:44.650 ************************************
00:04:44.650 13:43:39 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:04:44.650 13:43:39 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
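The run_test line above starts even_2G_alloc, which asks for 2097152 kB (2 GiB) of default-size hugepages and expects them spread evenly across both NUMA nodes; that is where the nr_hugepages=1024 and the two nodes_test[...]=512 assignments in the trace below come from. A rough sketch of that arithmetic, inferred from the traced values (the variable names here are illustrative, not the scripts' own):

  size_kb=2097152                            # requested by even_2G_alloc (2 GiB)
  hugepage_kb=2048                           # Hugepagesize reported in this run's meminfo
  nr_hugepages=$(( size_kb / hugepage_kb ))  # 1024 pages, exported below as NRHUGE
  nodes=2
  per_node=$(( nr_hugepages / nodes ))       # 512 pages each on node0 and node1
  echo "HUGE_EVEN_ALLOC=yes -> $per_node hugepages per node"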
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:44.650 13:43:39 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.650 13:43:39 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:44.650 ************************************ 00:04:44.650 START TEST even_2G_alloc 00:04:44.650 ************************************ 00:04:44.650 13:43:39 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:04:44.650 13:43:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:44.650 13:43:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:44.650 13:43:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:44.650 13:43:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:44.650 13:43:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:44.650 13:43:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:44.650 13:43:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:44.650 13:43:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:44.650 13:43:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:44.650 13:43:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:44.650 13:43:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:44.650 13:43:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:44.650 13:43:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:44.650 13:43:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:44.650 13:43:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:44.650 13:43:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:44.650 13:43:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:04:44.650 13:43:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:44.650 13:43:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:44.650 13:43:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:44.650 13:43:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:44.650 13:43:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:44.650 13:43:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:44.650 13:43:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:44.650 13:43:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:44.650 13:43:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:44.650 13:43:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:44.650 13:43:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:46.032 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:46.032 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 
00:04:46.032 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:46.032 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:46.032 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:46.032 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:46.032 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:46.032 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:46.032 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:46.032 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:46.032 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:46.032 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:46.032 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:46.032 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:46.032 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:46.032 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:46.032 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:46.032 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:46.032 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:46.032 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:46.032 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:46.032 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:46.032 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:46.032 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:46.032 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:46.032 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:46.032 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:46.032 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:46.032 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:46.032 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.032 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.032 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.032 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.032 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.032 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.032 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.032 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.032 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29433120 kB' 'MemAvailable: 33003824 kB' 'Buffers: 2704 kB' 'Cached: 10020580 kB' 'SwapCached: 0 kB' 'Active: 7055268 kB' 'Inactive: 3505248 kB' 'Active(anon): 6665728 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 
'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540556 kB' 'Mapped: 210772 kB' 'Shmem: 6128496 kB' 'KReclaimable: 175480 kB' 'Slab: 517032 kB' 'SReclaimable: 175480 kB' 'SUnreclaim: 341552 kB' 'KernelStack: 12384 kB' 'PageTables: 8268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7797156 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195680 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1615452 kB' 'DirectMap2M: 16130048 kB' 'DirectMap1G: 34603008 kB' 00:04:46.032 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.032 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.032 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.032 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.032 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.032 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.032 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.032 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.032 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.032 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.032 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.032 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.032 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.032 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.032 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.032 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.032 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.032 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.032 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.032 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.032 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.032 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.032 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.032 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.032 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.032 
13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[xtrace condensed: get_meminfo(AnonHugePages) keeps walking /proc/meminfo (Inactive, the Active/Inactive anon and file breakdowns, Unevictable, Mlocked, the swap counters, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, the slab counters, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce) up through WritebackTmp, taking the "continue" branch on every field that is not AnonHugePages]
00:04:46.033 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.033 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.033 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.033 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.033 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.033 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.033 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.033 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.033 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.033 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.033 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.033 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.033 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.033 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.033 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.033 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.033 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.033 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.033 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.033 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.033 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.033 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.033 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.033 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.033 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.033 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.033 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.033 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.033 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.033 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.033 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.033 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.033 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:46.033 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:46.033 13:43:40 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@97 -- # anon=0 00:04:46.033 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:46.033 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:46.033 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:46.033 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:46.033 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.033 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.033 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.033 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.033 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.033 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.033 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.033 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.033 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29434680 kB' 'MemAvailable: 33005384 kB' 'Buffers: 2704 kB' 'Cached: 10020584 kB' 'SwapCached: 0 kB' 'Active: 7054780 kB' 'Inactive: 3505248 kB' 'Active(anon): 6665240 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540064 kB' 'Mapped: 210620 kB' 'Shmem: 6128500 kB' 'KReclaimable: 175480 kB' 'Slab: 517032 kB' 'SReclaimable: 175480 kB' 'SUnreclaim: 341552 kB' 'KernelStack: 12368 kB' 'PageTables: 8204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7797176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195664 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1615452 kB' 'DirectMap2M: 16130048 kB' 'DirectMap1G: 34603008 kB' 00:04:46.033 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.033 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.033 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.033 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.033 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.033 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.033 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.033 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.033 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
[xtrace condensed: get_meminfo(HugePages_Surp) starts the same field-by-field walk over /proc/meminfo (Buffers, Cached, SwapCached, Active/Inactive and their anon/file breakdowns, Unevictable, Mlocked, the swap counters, Dirty, Writeback), taking the "continue" branch on each field; the walk carries on in the trace that follows]
-- setup/common.sh@32 -- # continue 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.034 13:43:40 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.034 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.035 13:43:40 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29434680 kB' 'MemAvailable: 33005384 kB' 'Buffers: 2704 kB' 'Cached: 10020584 kB' 'SwapCached: 0 kB' 'Active: 7054500 kB' 'Inactive: 3505248 kB' 'Active(anon): 6664960 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539784 kB' 'Mapped: 210620 kB' 'Shmem: 6128500 kB' 'KReclaimable: 175480 kB' 'Slab: 517032 kB' 'SReclaimable: 175480 kB' 'SUnreclaim: 341552 kB' 'KernelStack: 12368 kB' 'PageTables: 8204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7797196 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195664 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1615452 kB' 'DirectMap2M: 16130048 kB' 'DirectMap1G: 34603008 kB' 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.035 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.036 13:43:40 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.036 13:43:40 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.036 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.037 13:43:40 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:46.037 nr_hugepages=1024 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:46.037 resv_hugepages=0 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:46.037 surplus_hugepages=0 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:46.037 anon_hugepages=0 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
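
The xtrace above is setup/common.sh's get_meminfo helper scanning the meminfo key/value pairs one at a time until it reaches the requested hugepage counter (HugePages_Surp, then HugePages_Rsvd, and below HugePages_Total), after which setup/hugepages.sh echoes the collected values (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) and checks them against the expected even-2G allocation. As a rough, simplified illustration only, the sketch below shows that read-and-match pattern in bash; it is not the literal setup/common.sh or setup/hugepages.sh source, it ignores the per-NUMA-node meminfo path seen in the trace, and the names get_meminfo_sketch, expected, nr, free, resv and surp are invented for this example.

#!/usr/bin/env bash
# Minimal sketch (assumption: simplified re-implementation of the pattern traced
# above, not the real SPDK helper).
get_meminfo_sketch() {
    # Echo the value of one /proc/meminfo counter (e.g. HugePages_Surp), or 0 if absent.
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done < /proc/meminfo
    echo 0
}

nr=$(get_meminfo_sketch HugePages_Total)
free=$(get_meminfo_sketch HugePages_Free)
resv=$(get_meminfo_sketch HugePages_Rsvd)
surp=$(get_meminfo_sketch HugePages_Surp)
printf 'nr_hugepages=%s free=%s resv=%s surp=%s\n' "$nr" "$free" "$resv" "$surp"

# The trace above configured 1024 x 2048 kB pages (2 GB); a check in that spirit:
expected=1024
(( nr == expected )) || echo "expected $expected hugepages, found $nr" >&2

With 2048 kB pages, 1024 total pages corresponds to the Hugetlb: 2097152 kB figure reported in the meminfo snapshots above; the trace resumes below with the HugePages_Total lookup.
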
00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29434176 kB' 'MemAvailable: 33004880 kB' 'Buffers: 2704 kB' 'Cached: 10020588 kB' 'SwapCached: 0 kB' 'Active: 7054656 kB' 'Inactive: 3505248 kB' 'Active(anon): 6665116 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539936 kB' 'Mapped: 210620 kB' 'Shmem: 6128504 kB' 'KReclaimable: 175480 kB' 'Slab: 517032 kB' 'SReclaimable: 175480 kB' 'SUnreclaim: 341552 kB' 'KernelStack: 12368 kB' 'PageTables: 8204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7797220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195664 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1615452 kB' 'DirectMap2M: 16130048 kB' 'DirectMap1G: 34603008 kB' 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.037 13:43:40 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.037 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.038 
13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.038 13:43:40 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.038 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.039 
13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 21521688 kB' 'MemUsed: 3050668 kB' 'SwapCached: 0 kB' 'Active: 1342156 kB' 'Inactive: 73848 kB' 'Active(anon): 1212888 kB' 'Inactive(anon): 0 kB' 'Active(file): 129268 kB' 'Inactive(file): 73848 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1080032 kB' 'Mapped: 78952 kB' 'AnonPages: 339180 kB' 'Shmem: 876916 kB' 'KernelStack: 7304 kB' 'PageTables: 4360 
kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 43800 kB' 'Slab: 195196 kB' 'SReclaimable: 43800 kB' 'SUnreclaim: 151396 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.039 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.040 
13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.040 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19454316 kB' 'MemFree: 7918176 kB' 'MemUsed: 11536140 kB' 'SwapCached: 0 kB' 'Active: 5712444 kB' 'Inactive: 3431400 kB' 'Active(anon): 5452172 kB' 'Inactive(anon): 0 kB' 'Active(file): 260272 kB' 'Inactive(file): 3431400 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8943336 kB' 'Mapped: 131668 kB' 'AnonPages: 200564 kB' 'Shmem: 5251664 kB' 'KernelStack: 5064 kB' 'PageTables: 3840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 
'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 131680 kB' 'Slab: 321800 kB' 'SReclaimable: 131680 kB' 'SUnreclaim: 190120 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.041 13:43:40 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.041 13:43:40 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.041 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:46.042 node0=512 expecting 512 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:46.042 node1=512 expecting 512 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:46.042 00:04:46.042 real 0m1.384s 00:04:46.042 user 0m0.572s 00:04:46.042 sys 0m0.785s 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.042 13:43:40 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:46.042 ************************************ 00:04:46.042 END TEST even_2G_alloc 00:04:46.042 ************************************ 00:04:46.042 13:43:40 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:46.042 13:43:40 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:46.042 13:43:40 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:46.042 13:43:40 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.042 13:43:40 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:46.042 ************************************ 00:04:46.042 START TEST odd_alloc 
00:04:46.042 ************************************ 00:04:46.042 13:43:40 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:04:46.042 13:43:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:46.042 13:43:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:46.042 13:43:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:46.042 13:43:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:46.042 13:43:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:46.042 13:43:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:46.042 13:43:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:46.042 13:43:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:46.042 13:43:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:46.042 13:43:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:46.042 13:43:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:46.042 13:43:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:46.042 13:43:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:46.042 13:43:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:46.042 13:43:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:46.042 13:43:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:46.042 13:43:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:46.042 13:43:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:46.042 13:43:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:46.042 13:43:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:46.042 13:43:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:46.042 13:43:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:46.042 13:43:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:46.042 13:43:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:46.042 13:43:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:46.042 13:43:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:46.042 13:43:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:46.042 13:43:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:47.436 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:47.436 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:47.436 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:47.436 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:47.436 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:47.436 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:47.436 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:47.436 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 
00:04:47.436 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:47.436 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:47.436 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:47.436 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:47.436 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:47.436 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:47.436 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:47.436 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:47.436 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:47.436 13:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:47.436 13:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:47.436 13:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:47.436 13:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:47.436 13:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:47.436 13:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:47.436 13:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:47.436 13:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:47.436 13:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:47.436 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:47.436 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:47.436 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:47.436 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.436 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.436 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.436 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.436 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.436 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.436 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.436 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.436 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29454172 kB' 'MemAvailable: 33024868 kB' 'Buffers: 2704 kB' 'Cached: 10020708 kB' 'SwapCached: 0 kB' 'Active: 7053488 kB' 'Inactive: 3505248 kB' 'Active(anon): 6663948 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538388 kB' 'Mapped: 209800 kB' 'Shmem: 6128624 kB' 'KReclaimable: 175464 kB' 'Slab: 516776 kB' 'SReclaimable: 175464 kB' 'SUnreclaim: 341312 kB' 'KernelStack: 12656 kB' 'PageTables: 8960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352340 kB' 'Committed_AS: 7786080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196016 kB' 
'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1615452 kB' 'DirectMap2M: 16130048 kB' 'DirectMap1G: 34603008 kB' 00:04:47.436 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.436 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.436 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.436 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.436 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.436 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.436 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.436 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.436 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.436 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.436 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.436 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.436 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.436 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.436 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.436 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.436 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.436 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.436 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.436 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.436 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.436 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.436 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.436 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.436 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.436 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.436 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.436 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.436 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.436 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.436 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.436 13:43:42 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:47.436 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.436 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.436 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.436 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.437 13:43:42 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.437 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.438 
13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29454196 kB' 'MemAvailable: 33024892 kB' 'Buffers: 2704 kB' 'Cached: 10020708 kB' 'SwapCached: 0 kB' 'Active: 7053716 kB' 'Inactive: 3505248 kB' 'Active(anon): 6664176 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539068 kB' 'Mapped: 209880 kB' 'Shmem: 6128624 kB' 'KReclaimable: 175464 kB' 'Slab: 516776 kB' 'SReclaimable: 175464 kB' 'SUnreclaim: 341312 kB' 'KernelStack: 12592 kB' 'PageTables: 8468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352340 kB' 'Committed_AS: 7783740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195696 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1615452 kB' 'DirectMap2M: 16130048 kB' 'DirectMap1G: 34603008 kB' 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.438 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
[[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.439 13:43:42 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29454728 kB' 'MemAvailable: 33025424 kB' 'Buffers: 2704 kB' 'Cached: 10020732 kB' 'SwapCached: 0 kB' 'Active: 7051576 kB' 'Inactive: 3505248 kB' 'Active(anon): 6662036 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536596 kB' 'Mapped: 209780 kB' 'Shmem: 6128648 kB' 'KReclaimable: 175464 kB' 'Slab: 516724 kB' 'SReclaimable: 175464 kB' 'SUnreclaim: 341260 kB' 'KernelStack: 12352 kB' 'PageTables: 7880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352340 kB' 'Committed_AS: 7784132 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195664 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1615452 kB' 'DirectMap2M: 16130048 kB' 'DirectMap1G: 34603008 kB' 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.439 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.440 13:43:42 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.440 
13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.440 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.441 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:47.704 nr_hugepages=1025 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:47.704 resv_hugepages=0 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:47.704 surplus_hugepages=0 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:47.704 anon_hugepages=0 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@20 -- # local mem_f mem 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29454728 kB' 'MemAvailable: 33025424 kB' 'Buffers: 2704 kB' 'Cached: 10020756 kB' 'SwapCached: 0 kB' 'Active: 7051796 kB' 'Inactive: 3505248 kB' 'Active(anon): 6662256 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536812 kB' 'Mapped: 209780 kB' 'Shmem: 6128672 kB' 'KReclaimable: 175464 kB' 'Slab: 516724 kB' 'SReclaimable: 175464 kB' 'SUnreclaim: 341260 kB' 'KernelStack: 12336 kB' 'PageTables: 7824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352340 kB' 'Committed_AS: 7784152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195664 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1615452 kB' 'DirectMap2M: 16130048 kB' 'DirectMap1G: 34603008 kB' 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.704 13:43:42 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.704 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.705 13:43:42 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:47.705 13:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 21527064 kB' 'MemUsed: 3045292 kB' 'SwapCached: 0 kB' 'Active: 1340908 kB' 'Inactive: 73848 kB' 'Active(anon): 1211640 kB' 'Inactive(anon): 0 kB' 'Active(file): 129268 kB' 'Inactive(file): 73848 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1080036 kB' 'Mapped: 78952 kB' 'AnonPages: 337840 kB' 'Shmem: 876920 kB' 'KernelStack: 7288 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 43800 kB' 'Slab: 195060 kB' 'SReclaimable: 43800 kB' 'SUnreclaim: 151260 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
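The node-0 dump above is produced by the get_meminfo helper in setup/common.sh, which the trace then scans key by key until it reaches HugePages_Surp. A minimal sketch of that helper, reconstructed only from the xtrace shown here (not the verbatim SPDK script; the extglob prefix strip and the IFS=': ' field scan follow the trace):

    # Sketch of a get_meminfo-style reader, reconstructed from the trace above.
    shopt -s extglob                      # needed for the "Node +([0-9]) " prefix strip
    get_meminfo() {
        local get=$1 node=$2 var val _ line
        local mem_f=/proc/meminfo mem
        # Prefer the per-NUMA-node view when a node index is given and present.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node meminfo lines carry a "Node N " prefix; drop it so keys match.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"                   # kB for sizes, a bare count for HugePages_*
            return 0
        done
        return 1
    }
    # e.g. get_meminfo HugePages_Surp 0  -> surplus 2 MiB pages on NUMA node 0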
00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.706 13:43:42 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
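Between the node-0 and node-1 reads, hugepages.sh folds reserved and surplus pages into each node's expected count before the final comparison. A minimal sketch of that accounting loop, reusing the get_meminfo sketch above (resv and the per-node surplus are both 0 in this run, so the counts are unchanged; the array values are taken from this run):

    # Per-node bookkeeping sketched from the hugepages.sh@115-117 steps in the trace.
    nodes_test=(512 513)   # per-node counts for the odd 1025-page total in this run
    resv=0                 # reserved hugepages, 0 here
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        surp=$(get_meminfo HugePages_Surp "$node")   # per-node surplus, 0 in this trace
        (( nodes_test[node] += surp ))
    done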
00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19454316 kB' 'MemFree: 7929464 kB' 'MemUsed: 11524852 kB' 'SwapCached: 0 kB' 'Active: 5710940 kB' 'Inactive: 3431400 kB' 'Active(anon): 5450668 kB' 'Inactive(anon): 0 kB' 'Active(file): 260272 kB' 'Inactive(file): 3431400 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8943468 kB' 'Mapped: 130828 kB' 'AnonPages: 199012 kB' 'Shmem: 5251796 kB' 'KernelStack: 5064 kB' 'PageTables: 3660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 131664 kB' 'Slab: 321664 kB' 'SReclaimable: 131664 kB' 'SUnreclaim: 190000 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
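The node-1 dump above reports HugePages_Total: 513 against node 0's 512, i.e. the odd total of 1025 pages ends up split across the two NUMA nodes. After scanning for HugePages_Surp below, the trace compares the observed per-node counts against the requested split; a condensed sketch of that check, using values from this run (which array holds "requested" versus "observed" is inferred, and the sorted-set trick means it does not matter which node ended up holding the extra page):

    # Condensed sketch of the odd_alloc pass/fail check performed later in this trace.
    nodes_test=(513 512)   # per-node counts the test asked for (values from this run)
    nodes_sys=(512 513)    # per-node counts actually reported by the kernel
    sorted_t=() sorted_s=()
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1    # numeric array indices act as a sorted set
        sorted_s[nodes_sys[node]]=1
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done
    # "512 513" == "512 513": the split matches regardless of node order.
    [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo PASS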
00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.707 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.708 13:43:42 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node 
in "${!nodes_test[@]}" 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:47.708 node0=512 expecting 513 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:47.708 node1=513 expecting 512 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:47.708 00:04:47.708 real 0m1.537s 00:04:47.708 user 0m0.637s 00:04:47.708 sys 0m0.878s 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:47.708 13:43:42 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:47.708 ************************************ 00:04:47.708 END TEST odd_alloc 00:04:47.708 ************************************ 00:04:47.708 13:43:42 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:47.708 13:43:42 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:47.708 13:43:42 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:47.708 13:43:42 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.708 13:43:42 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:47.708 ************************************ 00:04:47.708 START TEST custom_alloc 00:04:47.708 ************************************ 00:04:47.708 13:43:42 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:04:47.708 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:47.708 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:47.708 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:47.708 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:47.708 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:47.709 13:43:42 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:47.709 13:43:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:49.087 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:49.087 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:49.087 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:49.087 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:49.087 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:49.087 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:49.087 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:49.087 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:49.087 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:49.087 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:49.087 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:49.087 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 
00:04:49.087 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:49.087 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:49.087 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:49.087 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:49.087 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:49.087 13:43:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:49.087 13:43:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:49.087 13:43:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:49.087 13:43:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:49.087 13:43:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:49.087 13:43:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:49.087 13:43:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:49.087 13:43:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:49.087 13:43:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:49.087 13:43:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:49.087 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:49.087 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:49.087 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:49.087 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:49.087 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.087 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.087 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.087 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.087 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.087 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.087 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.087 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 28408024 kB' 'MemAvailable: 31978720 kB' 'Buffers: 2704 kB' 'Cached: 10020844 kB' 'SwapCached: 0 kB' 'Active: 7053616 kB' 'Inactive: 3505248 kB' 'Active(anon): 6664076 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539004 kB' 'Mapped: 210568 kB' 'Shmem: 6128760 kB' 'KReclaimable: 175464 kB' 'Slab: 516728 kB' 'SReclaimable: 175464 kB' 'SUnreclaim: 341264 kB' 'KernelStack: 12336 kB' 'PageTables: 7828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829076 kB' 'Committed_AS: 7786632 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195680 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 
kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1615452 kB' 'DirectMap2M: 16130048 kB' 'DirectMap1G: 34603008 kB' 00:04:49.087 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.087 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.087 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.087 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.087 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.087 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.087 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.087 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.087 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.087 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.087 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.087 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.087 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.087 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.087 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.087 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.088 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 28404632 kB' 'MemAvailable: 31975328 kB' 'Buffers: 2704 kB' 'Cached: 10020844 kB' 'SwapCached: 0 kB' 'Active: 7057296 kB' 'Inactive: 3505248 kB' 'Active(anon): 6667756 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542184 kB' 'Mapped: 210296 kB' 'Shmem: 6128760 kB' 'KReclaimable: 175464 kB' 'Slab: 516728 kB' 'SReclaimable: 175464 kB' 'SUnreclaim: 341264 kB' 'KernelStack: 12320 kB' 'PageTables: 7736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829076 kB' 'Committed_AS: 7790360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195636 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1615452 kB' 'DirectMap2M: 16130048 kB' 'DirectMap1G: 34603008 kB' 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.089 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.090 13:43:43 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.090 13:43:43 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.090 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 28407080 kB' 'MemAvailable: 31977776 kB' 'Buffers: 2704 kB' 'Cached: 10020856 kB' 'SwapCached: 0 kB' 'Active: 7057284 kB' 'Inactive: 3505248 kB' 'Active(anon): 6667744 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542200 kB' 
'Mapped: 210644 kB' 'Shmem: 6128772 kB' 'KReclaimable: 175464 kB' 'Slab: 516724 kB' 'SReclaimable: 175464 kB' 'SUnreclaim: 341260 kB' 'KernelStack: 12384 kB' 'PageTables: 7896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829076 kB' 'Committed_AS: 7790380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195636 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1615452 kB' 'DirectMap2M: 16130048 kB' 'DirectMap1G: 34603008 kB' 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.091 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.091 13:43:43 
setup.sh.hugepages.custom_alloc -- [xtrace condensed: setup/common.sh@31-32 read each remaining /proc/meminfo key (Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free) and continued past each one until HugePages_Rsvd matched]
00:04:49.092 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:49.092 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:49.092 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:49.092 13:43:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:49.092 13:43:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
nr_hugepages=1536
00:04:49.092 13:43:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:04:49.092 13:43:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:04:49.092 13:43:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:04:49.092 13:43:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:49.092 13:43:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:04:49.092 13:43:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:49.092 13:43:43 setup.sh.hugepages.custom_alloc -- [xtrace condensed: setup/common.sh@17-31 set get=HugePages_Total, node= (system-wide), mem_f=/proc/meminfo, mapfile'd the file and stripped any "Node <n>" prefixes]
00:04:49.093 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 28415872 kB' 'MemAvailable: 31986568 kB' 'Buffers: 2704 kB' 'Cached: 10020888 kB' 'SwapCached: 0 kB' 'Active: 7051812 kB' 'Inactive: 3505248 kB' 'Active(anon): 6662272 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536704 kB' 'Mapped: 209792 kB' 'Shmem: 6128804 kB' 'KReclaimable: 175464 kB' 'Slab: 516708 kB' 'SReclaimable: 175464 kB' 'SUnreclaim: 341244 kB' 'KernelStack: 12368 kB' 'PageTables: 7824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829076 kB' 'Committed_AS: 7784280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195632 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1615452 kB' 'DirectMap2M: 16130048 kB' 'DirectMap1G: 34603008 kB'
00:04:49.093 13:43:43 setup.sh.hugepages.custom_alloc -- [xtrace condensed: setup/common.sh@31-32 compared MemTotal through Unaccepted against HugePages_Total and continued past each one]
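The condensed scan above is just a keyed lookup over /proc/meminfo (or a per-node meminfo file). As an illustration only, under the assumption of the standard /proc and sysfs meminfo formats, the same pattern can be written as a standalone bash helper; the name get_meminfo_sketch is hypothetical and this is not the code in setup/common.sh:
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # per-node lookups switch to the node-local file when it exists
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while read -r line; do
        # per-node files prefix every line with "Node <n> "; drop it before parsing
        [[ $line =~ ^Node\ [0-9]+\ (.*)$ ]] && line=${BASH_REMATCH[1]}
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"   # e.g. "1536" for HugePages_Total, "44026672" (kB) for MemTotal
            return 0
        fi
    done < "$mem_f"
    return 1
}
# usage: get_meminfo_sketch HugePages_Rsvd      (system-wide, prints 0 on this box)
# usage: get_meminfo_sketch HugePages_Surp 0    (node 0, prints 0 on this box)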
00:04:49.094 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:49.094 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:04:49.094 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:49.094 13:43:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:49.094 13:43:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:49.094 13:43:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:04:49.094 13:43:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:49.094 13:43:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:49.094 13:43:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:49.094 13:43:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:49.094 13:43:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:49.094 13:43:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
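The get_nodes trace above walks /sys/devices/system/node/node<N> and records one hugepage count per NUMA node. A rough sketch of that enumeration, assuming the standard 2048kB per-node sysfs counters (the sysfs paths are an assumption, not taken from this log):
nodes_sys=()
for node in /sys/devices/system/node/node[0-9]*; do
    # skip nodes that do not expose a 2 MiB hugepage pool in sysfs
    [[ -e $node/hugepages/hugepages-2048kB/nr_hugepages ]] || continue
    nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done
no_nodes=${#nodes_sys[@]}
# on the machine in this log this would give nodes_sys[0]=512, nodes_sys[1]=1024, no_nodes=2
(( no_nodes > 0 ))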
00:04:49.094 13:43:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:49.094 13:43:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:49.094 13:43:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:49.094 13:43:43 setup.sh.hugepages.custom_alloc -- [xtrace condensed: setup/common.sh@17-31 set get=HugePages_Surp, node=0, switched mem_f to /sys/devices/system/node/node0/meminfo, mapfile'd it and stripped the "Node 0" prefixes]
00:04:49.094 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 21526988 kB' 'MemUsed: 3045368 kB' 'SwapCached: 0 kB' 'Active: 1340872 kB' 'Inactive: 73848 kB' 'Active(anon): 1211604 kB' 'Inactive(anon): 0 kB' 'Active(file): 129268 kB' 'Inactive(file): 73848 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1080040 kB' 'Mapped: 78952 kB' 'AnonPages: 337800 kB' 'Shmem: 876924 kB' 'KernelStack: 7304 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 43800 kB' 'Slab: 194988 kB' 'SReclaimable: 43800 kB' 'SUnreclaim: 151188 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:49.095 13:43:43 setup.sh.hugepages.custom_alloc -- [xtrace condensed: setup/common.sh@31-32 compared MemTotal through HugePages_Free against HugePages_Surp and continued past each one]
00:04:49.095 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:49.096 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:49.096 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:49.096 13:43:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:49.096 13:43:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
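The @115-@117 loop above folds the reserved count and each node's surplus pages into that node's expected total before the final comparison against the requested 1536 pages. A hedged sketch of that accounting, reusing the hypothetical get_meminfo_sketch helper from the earlier sketch and assuming nodes_test starts as the requested 512/1024 split seen in this log:
resv=0
nodes_test=([0]=512 [1]=1024)                 # assumed starting split, mirrors the log above
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))                        # reserved pages count toward the node
    surp=$(get_meminfo_sketch HugePages_Surp "$node")     # per-node surplus, 0 for both nodes here
    (( nodes_test[node] += surp ))
done
total=0
for n in "${nodes_test[@]}"; do (( total += n )); done
(( total == 1536 )) && echo "per-node custom allocation adds up"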
00:04:49.096 13:43:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:49.096 13:43:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:49.096 13:43:43 setup.sh.hugepages.custom_alloc -- [xtrace condensed: setup/common.sh@17-31 set get=HugePages_Surp, node=1, switched mem_f to /sys/devices/system/node/node1/meminfo, mapfile'd it and stripped the "Node 1" prefixes]
00:04:49.096 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19454316 kB' 'MemFree: 6888796 kB' 'MemUsed: 12565520 kB' 'SwapCached: 0 kB' 'Active: 5711124 kB' 'Inactive: 3431400 kB' 'Active(anon): 5450852 kB' 'Inactive(anon): 0 kB' 'Active(file): 260272 kB' 'Inactive(file): 3431400 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8943592 kB' 'Mapped: 130840 kB' 'AnonPages: 199108 kB' 'Shmem: 5251920 kB' 'KernelStack: 5080 kB' 'PageTables: 3660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 131664 kB' 'Slab: 321720 kB' 'SReclaimable: 131664 kB' 'SUnreclaim: 190056 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:49.096 13:43:43 setup.sh.hugepages.custom_alloc -- [xtrace condensed: setup/common.sh@31-32 compared MemTotal through AnonHugePages against HugePages_Surp and continued past each one]
00:04:49.097 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.097 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.097 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.097 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.097 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.097 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.097 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.097 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.097 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.097 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.097 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.097 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.097 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.097 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.097 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.097 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.097 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.097 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.097 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.097 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.097 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.097 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.097 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.097 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.097 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.097 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.097 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.097 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.097 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.097 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:49.097 13:43:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:49.097 13:43:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:49.097 13:43:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:49.097 13:43:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:49.097 13:43:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # 
sorted_s[nodes_sys[node]]=1 00:04:49.097 13:43:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:49.097 node0=512 expecting 512 00:04:49.097 13:43:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:49.097 13:43:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:49.097 13:43:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:49.097 13:43:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:49.097 node1=1024 expecting 1024 00:04:49.097 13:43:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:49.097 00:04:49.097 real 0m1.459s 00:04:49.097 user 0m0.644s 00:04:49.097 sys 0m0.789s 00:04:49.097 13:43:43 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:49.097 13:43:43 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:49.097 ************************************ 00:04:49.097 END TEST custom_alloc 00:04:49.097 ************************************ 00:04:49.097 13:43:43 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:49.097 13:43:43 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:49.097 13:43:43 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:49.097 13:43:43 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.097 13:43:43 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:49.097 ************************************ 00:04:49.097 START TEST no_shrink_alloc 00:04:49.097 ************************************ 00:04:49.097 13:43:43 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:04:49.097 13:43:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:49.097 13:43:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:49.097 13:43:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:49.097 13:43:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:49.097 13:43:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:49.097 13:43:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:49.097 13:43:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:49.097 13:43:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:49.097 13:43:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:49.097 13:43:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:49.097 13:43:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:49.097 13:43:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:49.097 13:43:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:49.097 13:43:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:49.097 13:43:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g 
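[editorial aside] The custom_alloc run above finishes by checking the per-node split it configured ("node0=512 expecting 512", "node1=1024 expecting 1024") against the literal string 512,1024. For reference only, the same per-node counts can also be read back from the kernel's sysfs hugepage counters; the sketch below is an illustration of that cross-check, not part of the SPDK scripts, and it assumes the 2048 kB hugepage size reported in the meminfo snapshots further down.

  # Illustrative cross-check (not from setup/hugepages.sh): read the per-node
  # 2 MiB hugepage counts from sysfs and compare with the expected split.
  expected="512,1024"            # the 'node0=512 ... node1=1024' expectation above
  counts=()
  for node_dir in /sys/devices/system/node/node[0-9]*; do
      counts+=("$(cat "$node_dir/hugepages/hugepages-2048kB/nr_hugepages")")
  done
  joined=$(IFS=,; echo "${counts[*]}")
  echo "per-node hugepages: $joined (expecting $expected)"
  [[ $joined == "$expected" ]] && echo OK || echo MISMATCH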
nodes_test 00:04:49.097 13:43:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:49.097 13:43:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:49.097 13:43:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:49.097 13:43:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:49.097 13:43:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:49.097 13:43:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:49.097 13:43:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:50.477 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:50.477 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:50.477 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:50.477 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:50.477 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:50.477 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:50.477 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:50.477 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:50.477 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:50.477 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:50.478 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:50.478 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:50.478 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:50.478 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:50.478 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:50.478 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:50.478 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.478 13:43:45 
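[editorial aside] The scripts/setup.sh output above reports every listed PCI function as "Already using the vfio-pci driver". A generic way to confirm which kernel driver a given function is bound to is to resolve its driver symlink in sysfs; the snippet below is only an illustration (the 0000:82:00.0 address is copied from this log, and setup.sh itself does considerably more than this check).

  # Illustration only: show the kernel driver currently bound to one device.
  bdf=0000:82:00.0
  if [[ -L /sys/bus/pci/devices/$bdf/driver ]]; then
      echo "$bdf -> $(basename "$(readlink -f "/sys/bus/pci/devices/$bdf/driver")")"
  else
      echo "$bdf is not bound to any driver"
  fi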
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29462764 kB' 'MemAvailable: 33033460 kB' 'Buffers: 2704 kB' 'Cached: 10020972 kB' 'SwapCached: 0 kB' 'Active: 7052404 kB' 'Inactive: 3505248 kB' 'Active(anon): 6662864 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536784 kB' 'Mapped: 209896 kB' 'Shmem: 6128888 kB' 'KReclaimable: 175464 kB' 'Slab: 516864 kB' 'SReclaimable: 175464 kB' 'SUnreclaim: 341400 kB' 'KernelStack: 12384 kB' 'PageTables: 7836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7784676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195664 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1615452 kB' 'DirectMap2M: 16130048 kB' 'DirectMap1G: 34603008 kB' 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
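[editorial aside] The snapshot printed above already contains the hugepage bookkeeping this test keeps re-reading: HugePages_Total: 1024, HugePages_Free: 1024, HugePages_Rsvd: 0, HugePages_Surp: 0 and Hugepagesize: 2048 kB, i.e. 1024 x 2048 kB = 2097152 kB, which matches the Hugetlb line, with no pages in use yet. A standalone awk one-liner that prints the same accounting from a live /proc/meminfo (illustrative only, not how the test reads it):

  awk '/^HugePages_(Total|Free|Rsvd|Surp):|^Hugepagesize:/ { v[$1] = $2 }
       END {
           printf "total=%d free=%d rsvd=%d surp=%d in_use=%d pages of %d kB\n",
                  v["HugePages_Total:"], v["HugePages_Free:"], v["HugePages_Rsvd:"],
                  v["HugePages_Surp:"], v["HugePages_Total:"] - v["HugePages_Free:"],
                  v["Hugepagesize:"]
       }' /proc/meminfo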
00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.478 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29465412 kB' 'MemAvailable: 33036108 kB' 'Buffers: 2704 kB' 'Cached: 10020972 kB' 'SwapCached: 0 kB' 'Active: 7052212 kB' 'Inactive: 3505248 kB' 'Active(anon): 6662672 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537040 kB' 'Mapped: 209812 kB' 'Shmem: 6128888 kB' 'KReclaimable: 175464 kB' 'Slab: 516848 kB' 'SReclaimable: 175464 kB' 'SUnreclaim: 341384 kB' 'KernelStack: 12400 kB' 'PageTables: 7860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7784692 kB' 'VmallocTotal: 34359738367 kB' 
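[editorial aside] The long scans in this section are all the same helper at work: setup/common.sh's get_meminfo walks every line of /proc/meminfo with IFS=': ' read -r var val _ until the requested key matches and then echoes its value (AnonHugePages just resolved to anon=0 here; the HugePages_Surp and HugePages_Rsvd passes below repeat the pattern for surp and resv, also 0 in this run). Below is a condensed re-implementation of that pattern, written from what the trace shows rather than copied from the SPDK helper, with the per-node "Node N " prefix handling simplified; the function name and the final echo 0 fallback are the sketch's own choices.

  # Sketch of the pattern traced above (get_meminfo_sketch is an illustrative
  # name, not the real setup/common.sh function).
  get_meminfo_sketch() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      # With a node argument the helper switches to the per-node meminfo file,
      # whose rows are prefixed with "Node N ".
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local line var val _
      while IFS= read -r line; do
          [[ $line =~ ^Node\ [0-9]+\ (.*) ]] && line=${BASH_REMATCH[1]}
          IFS=': ' read -r var val _ <<< "$line"
          if [[ $var == "$get" ]]; then
              echo "$val"
              return 0
          fi
      done < "$mem_f"
      echo 0   # fallback if the key is missing (sketch's own choice)
  }

  # The three values this section collects, all 0 in this run:
  anon=$(get_meminfo_sketch AnonHugePages)
  surp=$(get_meminfo_sketch HugePages_Surp)
  resv=$(get_meminfo_sketch HugePages_Rsvd)
  echo "anon=$anon surp=$surp resv=$resv"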
'VmallocUsed: 195616 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1615452 kB' 'DirectMap2M: 16130048 kB' 'DirectMap1G: 34603008 kB' 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.479 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.480 13:43:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.480 
13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.480 13:43:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.480 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.481 13:43:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29465392 kB' 'MemAvailable: 33036088 kB' 'Buffers: 2704 kB' 'Cached: 10020976 kB' 'SwapCached: 0 kB' 'Active: 7051876 kB' 'Inactive: 3505248 kB' 'Active(anon): 6662336 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536696 kB' 'Mapped: 209812 kB' 'Shmem: 6128892 kB' 'KReclaimable: 175464 kB' 'Slab: 516948 kB' 'SReclaimable: 175464 kB' 'SUnreclaim: 341484 kB' 'KernelStack: 12416 kB' 'PageTables: 7828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7784716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195616 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 
'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1615452 kB' 'DirectMap2M: 16130048 kB' 'DirectMap1G: 34603008 kB' 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.481 
13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.481 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.482 13:43:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.482 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.483 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.483 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.483 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.483 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.483 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
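
The trace above is the get_meminfo helper from setup/common.sh scanning /proc/meminfo key by key: it reads the file into an array with mapfile, strips any "Node <n> " prefix, splits each line on ': ', skips every key that is not the one requested (first HugePages_Surp, here HugePages_Rsvd), and echoes the value on a match. A minimal standalone sketch of that loop, reconstructed from the commands visible in the trace; names follow the trace, but the internals are a simplified illustration, not the exact upstream script:

    # Sketch of the per-key meminfo lookup stepped through above (reconstruction,
    # not the verbatim setup/common.sh implementation).
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # With a node argument, prefer that node's own meminfo file if present.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        shopt -s extglob
        local -a mem
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <n> "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

On this machine both lookups return 0, which is what hugepages.sh records as surp=0 and resv=0 in the trace.
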
00:04:50.483 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.483 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.483 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.483 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.483 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.483 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.483 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.483 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.483 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.483 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.483 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.483 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.483 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.483 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.483 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.483 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.483 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.483 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.483 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.483 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.483 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.483 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.483 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.483 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.483 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.483 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.743 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.743 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.743 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.743 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.743 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.743 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.743 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.743 13:43:45 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.743 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:50.743 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:50.743 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:50.743 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:50.743 nr_hugepages=1024 00:04:50.743 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:50.743 resv_hugepages=0 00:04:50.743 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:50.743 surplus_hugepages=0 00:04:50.743 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:50.743 anon_hugepages=0 00:04:50.743 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:50.743 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:50.743 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:50.743 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:50.743 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:50.743 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:50.743 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:50.743 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.743 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:50.743 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:50.743 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.743 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.743 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.743 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.743 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29465488 kB' 'MemAvailable: 33036184 kB' 'Buffers: 2704 kB' 'Cached: 10021012 kB' 'SwapCached: 0 kB' 'Active: 7052228 kB' 'Inactive: 3505248 kB' 'Active(anon): 6662688 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537012 kB' 'Mapped: 209812 kB' 'Shmem: 6128928 kB' 'KReclaimable: 175464 kB' 'Slab: 516948 kB' 'SReclaimable: 175464 kB' 'SUnreclaim: 341484 kB' 'KernelStack: 12432 kB' 'PageTables: 7876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7784736 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195616 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 
'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1615452 kB' 'DirectMap2M: 16130048 kB' 'DirectMap1G: 34603008 kB' 00:04:50.743 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.743 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.744 13:43:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.744 13:43:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.744 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
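
With surp and resv in hand, hugepages.sh checks the pool accounting (the @107/@110 tests in the trace): the kernel's HugePages_Total, 1024 here, must equal the requested nr_hugepages plus surplus plus reserved pages, and the echoed summary (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) records the inputs. A small sketch of that same invariant, assuming the get_meminfo sketch above; nr_hugepages=1024 mirrors what this test configured rather than being read back from the kernel:

    # Sketch of the accounting check around hugepages.sh@110: the static pool the
    # test requested must match what the kernel reports once surplus and reserved
    # pages are included (1024 == 1024 + 0 + 0 in this run).
    nr_hugepages=1024                      # pool size the test configured
    surp=$(get_meminfo HugePages_Surp)     # pages allocated beyond the static pool
    resv=$(get_meminfo HugePages_Rsvd)     # pages reserved but not yet faulted in
    total=$(get_meminfo HugePages_Total)
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage pool consistent: total=$total"
    else
        echo "unexpected hugepage accounting: total=$total surp=$surp resv=$resv" >&2
    fi
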
00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 20488064 kB' 'MemUsed: 4084292 kB' 'SwapCached: 0 kB' 'Active: 1340732 kB' 'Inactive: 73848 kB' 'Active(anon): 1211464 kB' 'Inactive(anon): 0 kB' 'Active(file): 129268 kB' 'Inactive(file): 73848 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1080048 kB' 'Mapped: 78952 kB' 'AnonPages: 337644 kB' 'Shmem: 876932 kB' 'KernelStack: 7352 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 43800 kB' 'Slab: 195120 kB' 'SReclaimable: 43800 kB' 'SUnreclaim: 151320 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.745 13:43:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.745 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.746 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.746 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.746 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.746 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.746 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.746 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.746 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.746 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.746 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.746 13:43:45 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue
[trace condensed: get_meminfo keeps scanning /proc/meminfo for HugePages_Surp; the fields Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted and HugePages_Total are each read and skipped with "continue" before the trace resumes below]
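The trace above is the xtrace output of a small meminfo lookup loop: set IFS=': ', read each "field: value" pair, skip every field that is not the one requested, and echo the value when the requested field matches. A minimal sketch of that pattern, reconstructed from the trace rather than copied from setup/common.sh (the function name is illustrative), looks like this:

# Sketch only -- reconstructed from the xtrace above, not copied from
# setup/common.sh. Scan a meminfo file line by line, skip every field that
# does not match the requested one, and echo the value of the one that does.
get_meminfo_sketch() {
    local get=$1               # e.g. HugePages_Surp, AnonHugePages, HugePages_Rsvd
    local mem_f=/proc/meminfo  # the real helper can also read /sys/devices/system/node/node<N>/meminfo
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # this test/skip pair produces the long runs of "continue" in the trace
        echo "$val"
        return 0
    done < "$mem_f"
    return 1                   # requested field not present
}

# Example: the surplus hugepage count that the trace above resolves to 0.
get_meminfo_sketch HugePages_Surp

Echoing the value keeps the helper usable in command substitution, which is presumably how the surrounding hugepages.sh assigns the anon=, surp= and resv= values seen later in this log.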
00:04:50.746 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.746 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.746 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.746 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.746 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.746 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:50.746 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:50.746 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:50.746 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:50.746 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:50.746 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:50.746 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:50.746 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:50.746 node0=1024 expecting 1024 00:04:50.746 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:50.746 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:50.746 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:50.747 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:50.747 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:50.747 13:43:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:52.129 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:52.129 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:52.129 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:52.129 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:52.130 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:52.130 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:52.130 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:52.130 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:52.130 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:52.130 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:52.130 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:52.130 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:52.130 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:52.130 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:52.130 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:52.130 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:52.130 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:52.130 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:52.130 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # 
verify_nr_hugepages 00:04:52.130 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:52.130 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:52.130 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:52.130 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:52.130 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:52.130 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:52.130 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:52.130 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:52.130 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:52.130 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:52.130 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:52.130 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:52.130 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.130 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.130 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.130 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.130 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.130 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.130 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.130 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29454572 kB' 'MemAvailable: 33025268 kB' 'Buffers: 2704 kB' 'Cached: 10021084 kB' 'SwapCached: 0 kB' 'Active: 7055188 kB' 'Inactive: 3505248 kB' 'Active(anon): 6665648 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539968 kB' 'Mapped: 209812 kB' 'Shmem: 6129000 kB' 'KReclaimable: 175464 kB' 'Slab: 516924 kB' 'SReclaimable: 175464 kB' 'SUnreclaim: 341460 kB' 'KernelStack: 12848 kB' 'PageTables: 8704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7787444 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195952 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1615452 kB' 'DirectMap2M: 16130048 kB' 'DirectMap1G: 34603008 kB' 00:04:52.130 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.130 13:43:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[trace condensed: the same scan now runs for AnonHugePages; MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu and HardwareCorrupted are each skipped with "continue" until AnonHugePages matches]
00:04:52.131 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:52.131 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:52.131 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:52.131 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:52.131 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:52.131 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:52.131 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:52.131 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:52.131 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:52.131 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:52.131 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- #
[[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.131 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.131 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.131 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.131 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.131 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.131 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29452436 kB' 'MemAvailable: 33023132 kB' 'Buffers: 2704 kB' 'Cached: 10021088 kB' 'SwapCached: 0 kB' 'Active: 7054664 kB' 'Inactive: 3505248 kB' 'Active(anon): 6665124 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539408 kB' 'Mapped: 209884 kB' 'Shmem: 6129004 kB' 'KReclaimable: 175464 kB' 'Slab: 516964 kB' 'SReclaimable: 175464 kB' 'SUnreclaim: 341500 kB' 'KernelStack: 12592 kB' 'PageTables: 8304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7787460 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195904 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1615452 kB' 'DirectMap2M: 16130048 kB' 'DirectMap1G: 34603008 kB' 00:04:52.131 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.131 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.131 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.131 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.131 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.131 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.131 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.131 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.131 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.131 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.131 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.131 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.131 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.131 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.131 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.131 13:43:46 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _
[trace condensed: get_meminfo now scans for HugePages_Surp; Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free and HugePages_Rsvd are each skipped with "continue" until HugePages_Surp matches]
00:04:52.133 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:52.133 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:52.133 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:52.133 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:52.133 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:52.133 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:52.133 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:52.133 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:52.133 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:52.133 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:52.133 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:52.133 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:52.133 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
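The helper is now being called a third time, for HugePages_Rsvd. Outside the test harness, the counters involved in these checks can be read directly from standard kernel interfaces; the short illustration below (not part of the SPDK scripts) shows where the node0=1024 figure and the HugePages_Surp/HugePages_Rsvd values originate.

# Illustration only, not part of the SPDK scripts: the counters used by the
# checks in this trace come from standard kernel interfaces.

# System-wide 2 MiB hugepage pool (HugePages_Total in /proc/meminfo).
cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

# Per-NUMA-node pool, the source of the "node0=1024 expecting 1024" line.
for node in /sys/devices/system/node/node[0-9]*; do
    echo "$(basename "$node"): $(cat "$node"/hugepages/hugepages-2048kB/nr_hugepages)"
done

# Surplus and reserved pages, the two values get_meminfo returns as 0 here.
grep -E 'HugePages_(Surp|Rsvd)' /proc/meminfo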
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.133 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.133 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.133 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29451964 kB' 'MemAvailable: 33022660 kB' 'Buffers: 2704 kB' 'Cached: 10021108 kB' 'SwapCached: 0 kB' 'Active: 7053488 kB' 'Inactive: 3505248 kB' 'Active(anon): 6663948 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538096 kB' 'Mapped: 209764 kB' 'Shmem: 6129024 kB' 'KReclaimable: 175464 kB' 'Slab: 516988 kB' 'SReclaimable: 175464 kB' 'SUnreclaim: 341524 kB' 'KernelStack: 12688 kB' 'PageTables: 8364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7785124 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195792 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1615452 kB' 'DirectMap2M: 16130048 kB' 'DirectMap1G: 34603008 kB' 00:04:52.133 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.133 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.133 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.133 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.133 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.133 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.133 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.133 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.133 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.133 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.133 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.133 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.133 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.133 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.133 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.133 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.133 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.133 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.133 13:43:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
[trace condensed: the HugePages_Rsvd scan proceeds the same way; SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables and SecPageTables are each skipped with "continue", and the scan continues in the trace that follows]
setup/common.sh@31 -- # read -r var val _ 00:04:52.134 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.134 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.134 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.134 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.134 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.134 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.134 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.134 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.134 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.134 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.134 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.134 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.134 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.134 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.134 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.134 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.134 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.134 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.134 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.134 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.134 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.134 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.134 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.134 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.134 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.134 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.134 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.134 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.134 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.134 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.134 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.134 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
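The scan below finally matches HugePages_Rsvd, echoes its value (0 in this run) and returns; hugepages.sh then compares the configured page count against the reserved and surplus counts before re-reading HugePages_Total. Roughly, the no_shrink_alloc check traced here amounts to the following sketch, assuming the get_meminfo helper above (variable names taken from the trace):

    # Sketch only: verifies the hugepage pool did not shrink behind the test's back.
    verify_no_shrink() {
        local nr_hugepages=$1 resv surp total
        resv=$(get_meminfo HugePages_Rsvd)      # 0 in this run
        surp=$(get_meminfo HugePages_Surp)      # 0 in this run
        total=$(get_meminfo HugePages_Total)    # 1024 in this run
        echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"
        # All pages must still be accounted for: requested + surplus + reserved == total.
        (( total == nr_hugepages + surp + resv ))
    }

    verify_no_shrink 1024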
00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:52.135 nr_hugepages=1024 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:52.135 resv_hugepages=0 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:52.135 surplus_hugepages=0 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:52.135 anon_hugepages=0 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29453376 kB' 'MemAvailable: 33024072 kB' 'Buffers: 2704 kB' 'Cached: 10021108 kB' 'SwapCached: 0 kB' 'Active: 7052176 kB' 'Inactive: 3505248 kB' 'Active(anon): 6662636 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536864 kB' 'Mapped: 209824 kB' 'Shmem: 6129024 kB' 'KReclaimable: 175464 kB' 'Slab: 517092 kB' 'SReclaimable: 175464 kB' 'SUnreclaim: 341628 kB' 'KernelStack: 12368 kB' 'PageTables: 7464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7785144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195712 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1615452 kB' 'DirectMap2M: 16130048 kB' 'DirectMap1G: 34603008 kB' 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.135 
13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.135 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.136 13:43:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.136 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
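Once HugePages_Total resolves to 1024 below, get_nodes enumerates the NUMA node directories (no_nodes=2 on this host) and the same lookup is repeated against node0's own meminfo file, /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that the helper strips off. A hedged sketch of that per-node pass (array and helper names follow the trace; the hugepages-2048kB path is an assumption based on the Hugepagesize shown above):

    # Sketch of the per-node hugepage accounting seen in the trace (names approximate).
    shopt -s extglob nullglob
    declare -a nodes_sys nodes_test
    for node_dir in /sys/devices/system/node/node+([0-9]); do
        node=${node_dir##*node}                  # "0", "1", ...
        nodes_sys[node]=$(< "$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
    done
    echo "no_nodes=${#nodes_sys[@]}"             # 2 in this run
    for node in "${!nodes_sys[@]}"; do
        surp=$(get_meminfo HugePages_Surp "$node")   # node0 resolves to 0 here
        nodes_test[node]=$(( nodes_sys[node] + surp ))
        echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
    done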
00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.137 13:43:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 20468528 kB' 'MemUsed: 4103828 kB' 'SwapCached: 0 kB' 'Active: 1341276 kB' 'Inactive: 73848 kB' 'Active(anon): 1212008 kB' 'Inactive(anon): 0 kB' 'Active(file): 129268 kB' 'Inactive(file): 73848 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1080052 kB' 'Mapped: 78952 kB' 'AnonPages: 338276 kB' 'Shmem: 876936 kB' 'KernelStack: 7416 kB' 'PageTables: 4300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 43800 kB' 'Slab: 195208 kB' 'SReclaimable: 43800 kB' 'SUnreclaim: 151408 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.137 
13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.137 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.138 
13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.138 13:43:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.138 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.139 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.139 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.139 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.139 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.139 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.139 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.139 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:52.139 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:52.139 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:52.139 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:52.139 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:52.139 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:52.139 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:52.139 node0=1024 expecting 1024 00:04:52.139 13:43:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:52.139 00:04:52.139 real 0m2.937s 00:04:52.139 user 0m1.222s 00:04:52.139 sys 0m1.669s 00:04:52.139 13:43:46 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.139 13:43:46 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:52.139 ************************************ 00:04:52.139 END TEST no_shrink_alloc 00:04:52.139 ************************************ 00:04:52.139 13:43:46 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 
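The START/END TEST banners, the real/user/sys timing lines and the xtrace_disable calls that close out no_shrink_alloc above come from the run_test wrapper in autotest_common.sh, which is invoked again just below for the driver suite. A rough sketch of what that wrapper appears to do (argument check and xtrace toggling simplified; not the SPDK implementation):

    run_test() {
        # The trace shows a "[ 2 -le 1 ]" style check: a name plus a command is required.
        if [ $# -le 1 ]; then
            echo "usage: run_test <name> <command> [args...]" >&2
            return 1
        fi
        local test_name=$1 banner='************************************' rc
        shift
        printf '%s\nSTART TEST %s\n%s\n' "$banner" "$test_name" "$banner"
        time "$@"                 # prints the real/user/sys lines seen in the log
        rc=$?
        printf '%s\nEND TEST %s\n%s\n' "$banner" "$test_name" "$banner"
        return $rc
    }

    # e.g. the next suite in this log is launched as:
    # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh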
00:04:52.139 13:43:46 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:52.139 13:43:46 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:52.139 13:43:46 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:52.139 13:43:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:52.139 13:43:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:52.139 13:43:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:52.139 13:43:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:52.139 13:43:46 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:52.139 13:43:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:52.139 13:43:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:52.139 13:43:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:52.139 13:43:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:52.139 13:43:46 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:52.139 13:43:46 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:52.139 00:04:52.139 real 0m11.729s 00:04:52.139 user 0m4.545s 00:04:52.139 sys 0m6.187s 00:04:52.139 13:43:46 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.139 13:43:46 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:52.139 ************************************ 00:04:52.139 END TEST hugepages 00:04:52.139 ************************************ 00:04:52.139 13:43:46 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:52.139 13:43:46 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:52.139 13:43:46 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:52.139 13:43:46 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.139 13:43:46 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:52.139 ************************************ 00:04:52.139 START TEST driver 00:04:52.139 ************************************ 00:04:52.139 13:43:46 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:52.398 * Looking for test storage... 
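The guess_driver trace that follows settles on vfio-pci by checking three things: whether the noiommu escape hatch is exposed, whether the host actually has IOMMU groups, and whether modprobe can resolve the vfio_pci dependency chain to real .ko modules. A condensed, hedged sketch of that decision; the uio_pci_generic fallback and the echo format are assumptions, not SPDK's exact policy:

#!/usr/bin/env bash
# Sketch: pick vfio-pci roughly the way guess_driver does, falling back otherwise.
shopt -s nullglob

pick_driver() {
    local unsafe_noiommu=N
    local groups=(/sys/kernel/iommu_groups/*)

    [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
        unsafe_noiommu=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)

    # vfio-pci is viable when the IOMMU is populated (or noiommu mode is enabled)
    # and modprobe can resolve vfio_pci's dependency chain to loadable modules.
    if { (( ${#groups[@]} > 0 )) || [[ $unsafe_noiommu == [Yy] ]]; } &&
        modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
        echo vfio-pci
    else
        echo uio_pci_generic
    fi
}

echo "Looking for driver=$(pick_driver)"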
00:04:52.398 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:52.398 13:43:46 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:52.398 13:43:46 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:52.398 13:43:46 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:54.933 13:43:49 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:54.933 13:43:49 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:54.933 13:43:49 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.933 13:43:49 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:54.933 ************************************ 00:04:54.933 START TEST guess_driver 00:04:54.933 ************************************ 00:04:54.933 13:43:49 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:54.933 13:43:49 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:54.933 13:43:49 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:54.933 13:43:49 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:54.933 13:43:49 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:54.933 13:43:49 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:54.933 13:43:49 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:54.933 13:43:49 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:54.933 13:43:49 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:54.933 13:43:49 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:54.933 13:43:49 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 143 > 0 )) 00:04:54.933 13:43:49 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:54.933 13:43:49 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:54.933 13:43:49 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:54.933 13:43:49 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:54.933 13:43:49 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:54.933 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:54.933 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:54.933 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:54.933 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:54.933 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:54.933 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:54.933 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:54.933 13:43:49 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:54.933 13:43:49 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:54.933 13:43:49 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:54.933 13:43:49 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:54.933 13:43:49 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:54.933 Looking for driver=vfio-pci 00:04:54.933 13:43:49 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:54.933 13:43:49 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:54.933 13:43:49 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:54.933 13:43:49 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:56.308 13:43:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:56.308 13:43:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:56.308 13:43:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:56.308 13:43:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:56.308 13:43:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:56.308 13:43:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:56.308 13:43:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:56.308 13:43:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:56.308 13:43:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:56.308 13:43:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:56.308 13:43:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:56.308 13:43:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:56.308 13:43:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:56.308 13:43:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:56.308 13:43:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:56.308 13:43:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:56.308 13:43:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:56.308 13:43:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:56.308 13:43:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:56.308 13:43:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:56.308 13:43:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:56.308 13:43:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:56.308 13:43:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:56.308 13:43:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:56.308 13:43:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:56.308 13:43:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:56.308 13:43:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:56.308 13:43:50 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:56.308 13:43:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:56.308 13:43:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:56.308 13:43:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:56.308 13:43:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:56.308 13:43:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:56.308 13:43:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:56.308 13:43:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:56.308 13:43:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:56.308 13:43:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:56.308 13:43:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:56.308 13:43:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:56.308 13:43:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:56.308 13:43:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:56.308 13:43:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:56.308 13:43:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:56.308 13:43:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:56.308 13:43:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:56.308 13:43:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:56.308 13:43:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:56.308 13:43:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:57.241 13:43:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:57.241 13:43:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:57.241 13:43:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:57.241 13:43:51 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:57.241 13:43:51 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:57.241 13:43:51 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:57.241 13:43:51 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:59.797 00:04:59.797 real 0m5.001s 00:04:59.797 user 0m1.125s 00:04:59.797 sys 0m1.938s 00:04:59.797 13:43:54 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.797 13:43:54 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:59.797 ************************************ 00:04:59.797 END TEST guess_driver 00:04:59.797 ************************************ 00:04:59.797 13:43:54 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:59.797 00:04:59.797 real 0m7.698s 00:04:59.797 user 0m1.708s 00:04:59.797 sys 0m3.040s 00:04:59.797 13:43:54 
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.797 13:43:54 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:59.797 ************************************ 00:04:59.797 END TEST driver 00:04:59.797 ************************************ 00:05:00.056 13:43:54 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:00.056 13:43:54 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:00.056 13:43:54 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:00.056 13:43:54 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.056 13:43:54 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:00.056 ************************************ 00:05:00.056 START TEST devices 00:05:00.056 ************************************ 00:05:00.056 13:43:54 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:00.056 * Looking for test storage... 00:05:00.056 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:00.056 13:43:54 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:00.056 13:43:54 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:00.056 13:43:54 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:00.056 13:43:54 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:01.429 13:43:56 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:01.429 13:43:56 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:01.429 13:43:56 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:01.429 13:43:56 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:01.429 13:43:56 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:01.429 13:43:56 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:01.429 13:43:56 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:01.429 13:43:56 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:01.429 13:43:56 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:01.429 13:43:56 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:01.429 13:43:56 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:01.429 13:43:56 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:01.429 13:43:56 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:01.429 13:43:56 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:01.429 13:43:56 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:01.429 13:43:56 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:01.429 13:43:56 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:01.429 13:43:56 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:82:00.0 00:05:01.429 13:43:56 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\2\:\0\0\.\0* ]] 00:05:01.429 13:43:56 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:01.429 13:43:56 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:01.429 
13:43:56 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:01.688 No valid GPT data, bailing 00:05:01.688 13:43:56 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:01.688 13:43:56 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:01.688 13:43:56 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:01.688 13:43:56 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:01.688 13:43:56 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:01.688 13:43:56 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:01.688 13:43:56 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:05:01.688 13:43:56 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:05:01.688 13:43:56 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:01.688 13:43:56 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:82:00.0 00:05:01.688 13:43:56 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:01.688 13:43:56 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:01.688 13:43:56 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:01.688 13:43:56 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:01.688 13:43:56 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.688 13:43:56 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:01.688 ************************************ 00:05:01.688 START TEST nvme_mount 00:05:01.688 ************************************ 00:05:01.688 13:43:56 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:05:01.688 13:43:56 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:01.688 13:43:56 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:01.688 13:43:56 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:01.688 13:43:56 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:01.688 13:43:56 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:01.688 13:43:56 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:01.688 13:43:56 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:01.688 13:43:56 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:01.688 13:43:56 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:01.688 13:43:56 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:01.688 13:43:56 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:01.688 13:43:56 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:01.688 13:43:56 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:01.688 13:43:56 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:01.688 13:43:56 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:01.688 13:43:56 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
# (( part <= part_no )) 00:05:01.688 13:43:56 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:01.688 13:43:56 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:01.688 13:43:56 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:02.626 Creating new GPT entries in memory. 00:05:02.626 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:02.626 other utilities. 00:05:02.626 13:43:57 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:02.626 13:43:57 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:02.626 13:43:57 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:02.626 13:43:57 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:02.626 13:43:57 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:03.564 Creating new GPT entries in memory. 00:05:03.564 The operation has completed successfully. 00:05:03.564 13:43:58 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:03.564 13:43:58 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:03.564 13:43:58 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3621424 00:05:03.564 13:43:58 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:03.564 13:43:58 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:03.564 13:43:58 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:03.564 13:43:58 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:03.564 13:43:58 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:03.564 13:43:58 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:03.822 13:43:58 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:82:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:03.822 13:43:58 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:05:03.822 13:43:58 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:03.822 13:43:58 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:03.822 13:43:58 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:03.822 13:43:58 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:03.822 13:43:58 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:03.822 13:43:58 
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:03.823 13:43:58 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:03.823 13:43:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.823 13:43:58 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:05:03.823 13:43:58 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:03.823 13:43:58 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:03.823 13:43:58 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:04.757 13:43:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:04.757 13:43:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:04.757 13:43:59 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:04.757 13:43:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.757 13:43:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:04.757 13:43:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.757 13:43:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:04.757 13:43:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.757 13:43:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:04.757 13:43:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.757 13:43:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:04.757 13:43:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.757 13:43:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:04.757 13:43:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.757 13:43:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:04.757 13:43:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.757 13:43:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:04.757 13:43:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.757 13:43:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:04.757 13:43:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.757 13:43:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:04.757 13:43:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.757 13:43:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:04.757 13:43:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.757 13:43:59 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:04.757 13:43:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.757 13:43:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:04.757 13:43:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.757 13:43:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:04.757 13:43:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.757 13:43:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:04.757 13:43:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.757 13:43:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:04.757 13:43:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.757 13:43:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:04.757 13:43:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.016 13:43:59 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:05.016 13:43:59 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:05.016 13:43:59 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:05.016 13:43:59 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:05.016 13:43:59 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:05.016 13:43:59 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:05.016 13:43:59 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:05.016 13:43:59 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:05.016 13:43:59 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:05.016 13:43:59 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:05.016 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:05.016 13:43:59 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:05.016 13:43:59 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:05.275 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:05.275 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:05.275 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:05.275 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:05.275 13:44:00 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:05.275 13:44:00 setup.sh.devices.nvme_mount -- 
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:05.275 13:44:00 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:05.275 13:44:00 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:05.275 13:44:00 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:05.275 13:44:00 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:05.534 13:44:00 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:82:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:05.534 13:44:00 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:05:05.534 13:44:00 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:05.534 13:44:00 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:05.534 13:44:00 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:05.534 13:44:00 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:05.534 13:44:00 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:05.534 13:44:00 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:05.534 13:44:00 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:05.534 13:44:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.534 13:44:00 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:05:05.534 13:44:00 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:05.534 13:44:00 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:05.534 13:44:00 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:06.472 13:44:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:06.472 13:44:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:06.472 13:44:01 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:06.472 13:44:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.472 13:44:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:06.472 13:44:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.472 13:44:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:06.472 13:44:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.472 13:44:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 
== \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:06.472 13:44:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.472 13:44:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:06.472 13:44:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.472 13:44:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:06.472 13:44:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.472 13:44:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:06.472 13:44:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.472 13:44:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:06.472 13:44:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.472 13:44:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:06.472 13:44:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.472 13:44:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:06.472 13:44:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.472 13:44:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:06.472 13:44:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.472 13:44:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:06.472 13:44:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.472 13:44:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:06.472 13:44:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.472 13:44:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:06.472 13:44:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.472 13:44:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:06.472 13:44:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.472 13:44:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:06.472 13:44:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.472 13:44:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:06.472 13:44:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.729 13:44:01 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:06.729 13:44:01 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:06.729 13:44:01 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:06.729 13:44:01 setup.sh.devices.nvme_mount -- 
setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:06.729 13:44:01 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:06.729 13:44:01 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:06.729 13:44:01 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:82:00.0 data@nvme0n1 '' '' 00:05:06.729 13:44:01 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:05:06.729 13:44:01 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:06.729 13:44:01 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:06.729 13:44:01 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:06.729 13:44:01 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:06.729 13:44:01 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:06.729 13:44:01 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:06.729 13:44:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.729 13:44:01 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:05:06.729 13:44:01 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:06.729 13:44:01 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:06.730 13:44:01 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:08.125 13:44:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:08.125 13:44:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:08.125 13:44:02 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:08.125 13:44:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.125 13:44:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:08.125 13:44:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.125 13:44:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:08.125 13:44:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.125 13:44:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:08.125 13:44:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.125 13:44:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:08.125 13:44:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.125 13:44:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:08.125 13:44:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.125 13:44:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 
00:05:08.125 13:44:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.125 13:44:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:08.125 13:44:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.125 13:44:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:08.125 13:44:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.125 13:44:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:08.125 13:44:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.125 13:44:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:08.125 13:44:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.125 13:44:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:08.126 13:44:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.126 13:44:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:08.126 13:44:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.126 13:44:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:08.126 13:44:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.126 13:44:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:08.126 13:44:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.126 13:44:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:08.126 13:44:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.126 13:44:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:08.126 13:44:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.126 13:44:02 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:08.126 13:44:02 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:08.126 13:44:02 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:08.126 13:44:02 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:08.126 13:44:02 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:08.126 13:44:02 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:08.126 13:44:02 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:08.126 13:44:02 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:08.126 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:08.126 00:05:08.126 real 0m6.539s 00:05:08.126 user 0m1.565s 00:05:08.126 sys 0m2.593s 00:05:08.126 13:44:02 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:08.126 13:44:02 setup.sh.devices.nvme_mount -- 
common/autotest_common.sh@10 -- # set +x 00:05:08.126 ************************************ 00:05:08.126 END TEST nvme_mount 00:05:08.126 ************************************ 00:05:08.126 13:44:02 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:08.126 13:44:02 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:08.126 13:44:02 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:08.126 13:44:02 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.126 13:44:02 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:08.126 ************************************ 00:05:08.126 START TEST dm_mount 00:05:08.126 ************************************ 00:05:08.126 13:44:02 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:05:08.126 13:44:02 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:08.126 13:44:02 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:08.126 13:44:02 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:08.126 13:44:02 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:08.126 13:44:02 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:08.126 13:44:02 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:08.126 13:44:02 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:08.126 13:44:02 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:08.126 13:44:02 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:08.126 13:44:02 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:08.126 13:44:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:08.126 13:44:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:08.126 13:44:02 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:08.126 13:44:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:08.126 13:44:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:08.126 13:44:02 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:08.126 13:44:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:08.126 13:44:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:08.126 13:44:02 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:08.126 13:44:02 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:08.126 13:44:02 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:09.063 Creating new GPT entries in memory. 00:05:09.063 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:09.063 other utilities. 00:05:09.063 13:44:03 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:09.063 13:44:03 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:09.063 13:44:03 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:09.063 13:44:03 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:09.063 13:44:03 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:10.443 Creating new GPT entries in memory. 00:05:10.443 The operation has completed successfully. 00:05:10.443 13:44:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:10.443 13:44:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:10.443 13:44:04 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:10.443 13:44:04 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:10.443 13:44:04 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:11.379 The operation has completed successfully. 00:05:11.379 13:44:05 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:11.379 13:44:05 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:11.379 13:44:05 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3623831 00:05:11.379 13:44:05 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:11.379 13:44:05 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:11.379 13:44:05 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:11.379 13:44:05 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:11.379 13:44:05 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:11.379 13:44:05 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:11.379 13:44:05 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:11.379 13:44:05 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:11.379 13:44:05 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:11.379 13:44:05 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:11.379 13:44:05 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:11.379 13:44:05 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:11.379 13:44:05 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:11.379 13:44:05 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:11.379 13:44:05 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:11.379 13:44:05 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:11.379 13:44:05 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:11.379 13:44:05 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:11.379 13:44:06 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:11.379 13:44:06 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:82:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:11.379 13:44:06 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:05:11.379 13:44:06 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:11.379 13:44:06 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:11.379 13:44:06 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:11.379 13:44:06 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:11.379 13:44:06 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:11.379 13:44:06 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:11.379 13:44:06 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:11.379 13:44:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.379 13:44:06 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:05:11.379 13:44:06 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:11.379 13:44:06 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:11.379 13:44:06 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:12.752 13:44:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:12.752 13:44:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:12.752 13:44:07 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:12.752 13:44:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.752 13:44:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:12.752 13:44:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.752 13:44:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:12.752 13:44:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.752 13:44:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:12.752 13:44:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.752 13:44:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:12.752 13:44:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.752 13:44:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:12.752 13:44:07 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.752 13:44:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:12.752 13:44:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.752 13:44:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:12.752 13:44:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.753 13:44:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:12.753 13:44:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.753 13:44:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:12.753 13:44:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.753 13:44:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:12.753 13:44:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.753 13:44:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:12.753 13:44:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.753 13:44:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:12.753 13:44:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.753 13:44:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:12.753 13:44:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.753 13:44:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:12.753 13:44:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.753 13:44:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:12.753 13:44:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.753 13:44:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:12.753 13:44:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.753 13:44:07 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:12.753 13:44:07 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:12.753 13:44:07 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:12.753 13:44:07 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:12.753 13:44:07 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:12.753 13:44:07 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:12.753 13:44:07 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:82:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:12.753 13:44:07 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:05:12.753 13:44:07 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:12.753 13:44:07 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:12.753 13:44:07 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:12.753 13:44:07 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:12.753 13:44:07 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:12.753 13:44:07 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:12.753 13:44:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.753 13:44:07 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:05:12.753 13:44:07 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:12.753 13:44:07 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:12.753 13:44:07 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:13.684 13:44:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:13.684 13:44:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:13.684 13:44:08 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:13.684 13:44:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.684 13:44:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:13.684 13:44:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.684 13:44:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:13.684 13:44:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.684 13:44:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:13.684 13:44:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.684 13:44:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:13.684 13:44:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.684 13:44:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:13.684 13:44:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.684 13:44:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:13.684 13:44:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.684 13:44:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:13.684 13:44:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.684 13:44:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:13.684 13:44:08 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.684 13:44:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:13.684 13:44:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.684 13:44:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:13.684 13:44:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.684 13:44:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:13.684 13:44:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.684 13:44:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:13.684 13:44:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.684 13:44:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:13.684 13:44:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.684 13:44:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:13.684 13:44:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.684 13:44:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:13.684 13:44:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.684 13:44:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:13.684 13:44:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.942 13:44:08 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:13.942 13:44:08 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:13.942 13:44:08 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:13.942 13:44:08 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:13.942 13:44:08 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:13.942 13:44:08 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:13.942 13:44:08 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:13.943 13:44:08 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:13.943 13:44:08 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:13.943 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:13.943 13:44:08 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:13.943 13:44:08 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:13.943 00:05:13.943 real 0m5.789s 00:05:13.943 user 0m1.001s 00:05:13.943 sys 0m1.672s 00:05:13.943 13:44:08 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:13.943 13:44:08 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:13.943 ************************************ 00:05:13.943 END TEST dm_mount 00:05:13.943 ************************************ 00:05:13.943 13:44:08 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 
0 00:05:13.943 13:44:08 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:13.943 13:44:08 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:13.943 13:44:08 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:13.943 13:44:08 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:13.943 13:44:08 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:13.943 13:44:08 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:13.943 13:44:08 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:14.200 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:14.200 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:14.200 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:14.200 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:14.200 13:44:08 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:14.200 13:44:08 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:14.201 13:44:08 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:14.201 13:44:08 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:14.201 13:44:08 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:14.201 13:44:08 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:14.201 13:44:08 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:14.201 00:05:14.201 real 0m14.303s 00:05:14.201 user 0m3.280s 00:05:14.201 sys 0m5.307s 00:05:14.201 13:44:08 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.201 13:44:08 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:14.201 ************************************ 00:05:14.201 END TEST devices 00:05:14.201 ************************************ 00:05:14.201 13:44:08 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:14.201 00:05:14.201 real 0m45.137s 00:05:14.201 user 0m13.014s 00:05:14.201 sys 0m20.459s 00:05:14.201 13:44:08 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.201 13:44:08 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:14.201 ************************************ 00:05:14.201 END TEST setup.sh 00:05:14.201 ************************************ 00:05:14.201 13:44:09 -- common/autotest_common.sh@1142 -- # return 0 00:05:14.201 13:44:09 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:15.579 Hugepages 00:05:15.579 node hugesize free / total 00:05:15.579 node0 1048576kB 0 / 0 00:05:15.579 node0 2048kB 2048 / 2048 00:05:15.579 node1 1048576kB 0 / 0 00:05:15.579 node1 2048kB 0 / 0 00:05:15.579 00:05:15.579 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:15.579 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:15.579 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:15.579 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:15.579 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:15.579 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:15.579 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:15.579 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:15.579 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:15.579 I/OAT 
0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:15.579 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:15.579 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:15.579 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:15.579 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:15.579 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:15.579 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:15.579 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:15.579 NVMe 0000:82:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:15.579 13:44:10 -- spdk/autotest.sh@130 -- # uname -s 00:05:15.579 13:44:10 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:15.579 13:44:10 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:15.579 13:44:10 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:16.954 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:16.954 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:16.954 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:16.954 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:16.954 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:16.954 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:16.954 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:16.954 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:16.954 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:16.954 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:16.954 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:16.954 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:16.954 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:16.954 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:16.954 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:16.954 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:17.891 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:05:17.891 13:44:12 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:18.828 13:44:13 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:18.828 13:44:13 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:18.828 13:44:13 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:18.828 13:44:13 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:18.828 13:44:13 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:18.828 13:44:13 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:18.828 13:44:13 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:18.828 13:44:13 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:18.828 13:44:13 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:19.087 13:44:13 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:19.087 13:44:13 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:82:00.0 00:05:19.087 13:44:13 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:20.024 Waiting for block devices as requested 00:05:20.283 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:05:20.283 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:20.542 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:20.542 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:20.542 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:20.542 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:20.801 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:20.801 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:20.801 0000:00:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:05:21.058 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:21.058 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:21.058 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:21.058 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:21.317 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:21.317 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:21.317 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:21.317 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:21.577 13:44:16 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:21.577 13:44:16 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:82:00.0 00:05:21.577 13:44:16 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:21.577 13:44:16 -- common/autotest_common.sh@1502 -- # grep 0000:82:00.0/nvme/nvme 00:05:21.577 13:44:16 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 00:05:21.577 13:44:16 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 ]] 00:05:21.577 13:44:16 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 00:05:21.577 13:44:16 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:21.577 13:44:16 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:21.577 13:44:16 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:21.577 13:44:16 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:21.577 13:44:16 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:21.577 13:44:16 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:21.577 13:44:16 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:05:21.577 13:44:16 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:21.577 13:44:16 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:21.577 13:44:16 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:21.577 13:44:16 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:21.577 13:44:16 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:21.577 13:44:16 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:21.577 13:44:16 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:21.577 13:44:16 -- common/autotest_common.sh@1557 -- # continue 00:05:21.577 13:44:16 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:21.577 13:44:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:21.577 13:44:16 -- common/autotest_common.sh@10 -- # set +x 00:05:21.577 13:44:16 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:21.577 13:44:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:21.577 13:44:16 -- common/autotest_common.sh@10 -- # set +x 00:05:21.577 13:44:16 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:22.954 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:22.954 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:22.954 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:22.954 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:22.954 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:22.954 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:22.954 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:22.954 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:22.954 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:22.954 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 
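The trace above gates the namespace-revert path on the controller's OACS word: bit 3 (value 0x8) advertises Namespace Management, and a zero unvmcap means no unallocated capacity is left to revert. A minimal standalone version of that check, assuming nvme-cli is installed and the controller node is /dev/nvme0 as in this run:

#!/usr/bin/env bash
# Check Namespace Management support and unallocated capacity for an NVMe
# controller, mirroring the oacs/unvmcap parsing done by autotest_common.sh above.
set -euo pipefail

ctrlr=${1:-/dev/nvme0}

# Identify Controller: OACS bit 3 (0x8) = Namespace Management/Attachment commands.
oacs=$(nvme id-ctrl "$ctrlr" | grep -i '^oacs' | cut -d: -f2)
if (( oacs & 0x8 )); then
  echo "$ctrlr: namespace management supported (oacs=${oacs// /})"
else
  echo "$ctrlr: namespace management not supported (oacs=${oacs// /})"
fi

# unvmcap = unallocated NVM capacity in bytes; 0 means the drive is fully allocated.
unvmcap=$(nvme id-ctrl "$ctrlr" | grep -i '^unvmcap' | cut -d: -f2)
echo "$ctrlr: unallocated capacity = ${unvmcap// /} bytes"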
00:05:22.954 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:22.954 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:22.954 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:22.954 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:22.954 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:22.954 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:23.890 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:05:23.890 13:44:18 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:23.890 13:44:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:23.890 13:44:18 -- common/autotest_common.sh@10 -- # set +x 00:05:23.890 13:44:18 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:23.890 13:44:18 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:23.890 13:44:18 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:23.890 13:44:18 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:23.890 13:44:18 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:23.890 13:44:18 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:23.890 13:44:18 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:23.890 13:44:18 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:23.890 13:44:18 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:23.890 13:44:18 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:23.890 13:44:18 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:23.890 13:44:18 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:23.890 13:44:18 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:82:00.0 00:05:23.890 13:44:18 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:23.890 13:44:18 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:82:00.0/device 00:05:23.890 13:44:18 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:05:23.890 13:44:18 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:23.890 13:44:18 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:05:23.890 13:44:18 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:82:00.0 00:05:23.890 13:44:18 -- common/autotest_common.sh@1592 -- # [[ -z 0000:82:00.0 ]] 00:05:23.890 13:44:18 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=3629176 00:05:23.890 13:44:18 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:23.890 13:44:18 -- common/autotest_common.sh@1598 -- # waitforlisten 3629176 00:05:23.890 13:44:18 -- common/autotest_common.sh@829 -- # '[' -z 3629176 ']' 00:05:23.890 13:44:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.890 13:44:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:23.890 13:44:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.890 13:44:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:23.890 13:44:18 -- common/autotest_common.sh@10 -- # set +x 00:05:24.148 [2024-07-15 13:44:18.778868] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
00:05:24.148 [2024-07-15 13:44:18.778939] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3629176 ] 00:05:24.148 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.148 [2024-07-15 13:44:18.836430] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.148 [2024-07-15 13:44:18.948747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.406 13:44:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:24.406 13:44:19 -- common/autotest_common.sh@862 -- # return 0 00:05:24.406 13:44:19 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:05:24.406 13:44:19 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:05:24.406 13:44:19 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:82:00.0 00:05:27.721 nvme0n1 00:05:27.721 13:44:22 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:27.721 [2024-07-15 13:44:22.494413] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:27.721 [2024-07-15 13:44:22.494464] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:27.721 request: 00:05:27.721 { 00:05:27.721 "nvme_ctrlr_name": "nvme0", 00:05:27.721 "password": "test", 00:05:27.721 "method": "bdev_nvme_opal_revert", 00:05:27.721 "req_id": 1 00:05:27.721 } 00:05:27.721 Got JSON-RPC error response 00:05:27.721 response: 00:05:27.721 { 00:05:27.721 "code": -32603, 00:05:27.721 "message": "Internal error" 00:05:27.721 } 00:05:27.721 13:44:22 -- common/autotest_common.sh@1604 -- # true 00:05:27.721 13:44:22 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:05:27.721 13:44:22 -- common/autotest_common.sh@1608 -- # killprocess 3629176 00:05:27.721 13:44:22 -- common/autotest_common.sh@948 -- # '[' -z 3629176 ']' 00:05:27.721 13:44:22 -- common/autotest_common.sh@952 -- # kill -0 3629176 00:05:27.721 13:44:22 -- common/autotest_common.sh@953 -- # uname 00:05:27.721 13:44:22 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:27.721 13:44:22 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3629176 00:05:27.721 13:44:22 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:27.721 13:44:22 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:27.721 13:44:22 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3629176' 00:05:27.721 killing process with pid 3629176 00:05:27.721 13:44:22 -- common/autotest_common.sh@967 -- # kill 3629176 00:05:27.721 13:44:22 -- common/autotest_common.sh@972 -- # wait 3629176 00:05:29.675 13:44:24 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:29.675 13:44:24 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:29.675 13:44:24 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:29.675 13:44:24 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:29.675 13:44:24 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:29.675 13:44:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:29.675 13:44:24 -- common/autotest_common.sh@10 -- # set +x 00:05:29.675 13:44:24 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:29.675 13:44:24 -- spdk/autotest.sh@168 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:29.675 13:44:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:29.675 13:44:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.675 13:44:24 -- common/autotest_common.sh@10 -- # set +x 00:05:29.675 ************************************ 00:05:29.675 START TEST env 00:05:29.675 ************************************ 00:05:29.675 13:44:24 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:29.675 * Looking for test storage... 00:05:29.675 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:29.675 13:44:24 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:29.675 13:44:24 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:29.675 13:44:24 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.675 13:44:24 env -- common/autotest_common.sh@10 -- # set +x 00:05:29.675 ************************************ 00:05:29.675 START TEST env_memory 00:05:29.675 ************************************ 00:05:29.675 13:44:24 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:29.675 00:05:29.675 00:05:29.675 CUnit - A unit testing framework for C - Version 2.1-3 00:05:29.675 http://cunit.sourceforge.net/ 00:05:29.675 00:05:29.675 00:05:29.675 Suite: memory 00:05:29.675 Test: alloc and free memory map ...[2024-07-15 13:44:24.434858] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:29.675 passed 00:05:29.675 Test: mem map translation ...[2024-07-15 13:44:24.456528] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:29.675 [2024-07-15 13:44:24.456550] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:29.675 [2024-07-15 13:44:24.456608] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:29.675 [2024-07-15 13:44:24.456622] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:29.933 passed 00:05:29.933 Test: mem map registration ...[2024-07-15 13:44:24.499899] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:29.933 [2024-07-15 13:44:24.499920] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:29.933 passed 00:05:29.933 Test: mem map adjacent registrations ...passed 00:05:29.933 00:05:29.933 Run Summary: Type Total Ran Passed Failed Inactive 00:05:29.933 suites 1 1 n/a 0 0 00:05:29.933 tests 4 4 4 0 0 00:05:29.933 asserts 152 152 152 0 n/a 00:05:29.933 00:05:29.933 Elapsed time = 0.148 seconds 00:05:29.933 00:05:29.933 real 0m0.155s 00:05:29.933 user 0m0.147s 00:05:29.933 sys 0m0.007s 00:05:29.933 13:44:24 
env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.933 13:44:24 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:29.933 ************************************ 00:05:29.933 END TEST env_memory 00:05:29.933 ************************************ 00:05:29.933 13:44:24 env -- common/autotest_common.sh@1142 -- # return 0 00:05:29.933 13:44:24 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:29.933 13:44:24 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:29.933 13:44:24 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.933 13:44:24 env -- common/autotest_common.sh@10 -- # set +x 00:05:29.933 ************************************ 00:05:29.933 START TEST env_vtophys 00:05:29.933 ************************************ 00:05:29.933 13:44:24 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:29.933 EAL: lib.eal log level changed from notice to debug 00:05:29.933 EAL: Detected lcore 0 as core 0 on socket 0 00:05:29.933 EAL: Detected lcore 1 as core 1 on socket 0 00:05:29.933 EAL: Detected lcore 2 as core 2 on socket 0 00:05:29.933 EAL: Detected lcore 3 as core 3 on socket 0 00:05:29.933 EAL: Detected lcore 4 as core 4 on socket 0 00:05:29.933 EAL: Detected lcore 5 as core 5 on socket 0 00:05:29.933 EAL: Detected lcore 6 as core 8 on socket 0 00:05:29.933 EAL: Detected lcore 7 as core 9 on socket 0 00:05:29.933 EAL: Detected lcore 8 as core 10 on socket 0 00:05:29.933 EAL: Detected lcore 9 as core 11 on socket 0 00:05:29.933 EAL: Detected lcore 10 as core 12 on socket 0 00:05:29.933 EAL: Detected lcore 11 as core 13 on socket 0 00:05:29.933 EAL: Detected lcore 12 as core 0 on socket 1 00:05:29.933 EAL: Detected lcore 13 as core 1 on socket 1 00:05:29.933 EAL: Detected lcore 14 as core 2 on socket 1 00:05:29.933 EAL: Detected lcore 15 as core 3 on socket 1 00:05:29.933 EAL: Detected lcore 16 as core 4 on socket 1 00:05:29.933 EAL: Detected lcore 17 as core 5 on socket 1 00:05:29.933 EAL: Detected lcore 18 as core 8 on socket 1 00:05:29.933 EAL: Detected lcore 19 as core 9 on socket 1 00:05:29.933 EAL: Detected lcore 20 as core 10 on socket 1 00:05:29.933 EAL: Detected lcore 21 as core 11 on socket 1 00:05:29.933 EAL: Detected lcore 22 as core 12 on socket 1 00:05:29.933 EAL: Detected lcore 23 as core 13 on socket 1 00:05:29.933 EAL: Detected lcore 24 as core 0 on socket 0 00:05:29.933 EAL: Detected lcore 25 as core 1 on socket 0 00:05:29.933 EAL: Detected lcore 26 as core 2 on socket 0 00:05:29.933 EAL: Detected lcore 27 as core 3 on socket 0 00:05:29.933 EAL: Detected lcore 28 as core 4 on socket 0 00:05:29.933 EAL: Detected lcore 29 as core 5 on socket 0 00:05:29.933 EAL: Detected lcore 30 as core 8 on socket 0 00:05:29.933 EAL: Detected lcore 31 as core 9 on socket 0 00:05:29.933 EAL: Detected lcore 32 as core 10 on socket 0 00:05:29.933 EAL: Detected lcore 33 as core 11 on socket 0 00:05:29.933 EAL: Detected lcore 34 as core 12 on socket 0 00:05:29.933 EAL: Detected lcore 35 as core 13 on socket 0 00:05:29.933 EAL: Detected lcore 36 as core 0 on socket 1 00:05:29.933 EAL: Detected lcore 37 as core 1 on socket 1 00:05:29.933 EAL: Detected lcore 38 as core 2 on socket 1 00:05:29.933 EAL: Detected lcore 39 as core 3 on socket 1 00:05:29.933 EAL: Detected lcore 40 as core 4 on socket 1 00:05:29.933 EAL: Detected lcore 41 as core 5 on socket 1 00:05:29.933 EAL: Detected 
lcore 42 as core 8 on socket 1 00:05:29.933 EAL: Detected lcore 43 as core 9 on socket 1 00:05:29.933 EAL: Detected lcore 44 as core 10 on socket 1 00:05:29.933 EAL: Detected lcore 45 as core 11 on socket 1 00:05:29.933 EAL: Detected lcore 46 as core 12 on socket 1 00:05:29.933 EAL: Detected lcore 47 as core 13 on socket 1 00:05:29.934 EAL: Maximum logical cores by configuration: 128 00:05:29.934 EAL: Detected CPU lcores: 48 00:05:29.934 EAL: Detected NUMA nodes: 2 00:05:29.934 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:29.934 EAL: Detected shared linkage of DPDK 00:05:29.934 EAL: No shared files mode enabled, IPC will be disabled 00:05:29.934 EAL: Bus pci wants IOVA as 'DC' 00:05:29.934 EAL: Buses did not request a specific IOVA mode. 00:05:29.934 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:29.934 EAL: Selected IOVA mode 'VA' 00:05:29.934 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.934 EAL: Probing VFIO support... 00:05:29.934 EAL: IOMMU type 1 (Type 1) is supported 00:05:29.934 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:29.934 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:29.934 EAL: VFIO support initialized 00:05:29.934 EAL: Ask a virtual area of 0x2e000 bytes 00:05:29.934 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:29.934 EAL: Setting up physically contiguous memory... 00:05:29.934 EAL: Setting maximum number of open files to 524288 00:05:29.934 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:29.934 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:29.934 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:29.934 EAL: Ask a virtual area of 0x61000 bytes 00:05:29.934 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:29.934 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:29.934 EAL: Ask a virtual area of 0x400000000 bytes 00:05:29.934 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:29.934 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:29.934 EAL: Ask a virtual area of 0x61000 bytes 00:05:29.934 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:29.934 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:29.934 EAL: Ask a virtual area of 0x400000000 bytes 00:05:29.934 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:29.934 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:29.934 EAL: Ask a virtual area of 0x61000 bytes 00:05:29.934 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:29.934 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:29.934 EAL: Ask a virtual area of 0x400000000 bytes 00:05:29.934 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:29.934 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:29.934 EAL: Ask a virtual area of 0x61000 bytes 00:05:29.934 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:29.934 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:29.934 EAL: Ask a virtual area of 0x400000000 bytes 00:05:29.934 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:29.934 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:29.934 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:29.934 EAL: Ask a virtual area of 0x61000 bytes 00:05:29.934 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:29.934 EAL: Memseg list 
allocated at socket 1, page size 0x800kB 00:05:29.934 EAL: Ask a virtual area of 0x400000000 bytes 00:05:29.934 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:29.934 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:29.934 EAL: Ask a virtual area of 0x61000 bytes 00:05:29.934 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:29.934 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:29.934 EAL: Ask a virtual area of 0x400000000 bytes 00:05:29.934 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:29.934 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:29.934 EAL: Ask a virtual area of 0x61000 bytes 00:05:29.934 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:29.934 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:29.934 EAL: Ask a virtual area of 0x400000000 bytes 00:05:29.934 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:29.934 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:29.934 EAL: Ask a virtual area of 0x61000 bytes 00:05:29.934 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:29.934 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:29.934 EAL: Ask a virtual area of 0x400000000 bytes 00:05:29.934 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:29.934 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:29.934 EAL: Hugepages will be freed exactly as allocated. 00:05:29.934 EAL: No shared files mode enabled, IPC is disabled 00:05:29.934 EAL: No shared files mode enabled, IPC is disabled 00:05:29.934 EAL: TSC frequency is ~2700000 KHz 00:05:29.934 EAL: Main lcore 0 is ready (tid=7fde5745fa00;cpuset=[0]) 00:05:29.934 EAL: Trying to obtain current memory policy. 00:05:29.934 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:29.934 EAL: Restoring previous memory policy: 0 00:05:29.934 EAL: request: mp_malloc_sync 00:05:29.934 EAL: No shared files mode enabled, IPC is disabled 00:05:29.934 EAL: Heap on socket 0 was expanded by 2MB 00:05:29.934 EAL: No shared files mode enabled, IPC is disabled 00:05:29.934 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:29.934 EAL: Mem event callback 'spdk:(nil)' registered 00:05:29.934 00:05:29.934 00:05:29.934 CUnit - A unit testing framework for C - Version 2.1-3 00:05:29.934 http://cunit.sourceforge.net/ 00:05:29.934 00:05:29.934 00:05:29.934 Suite: components_suite 00:05:29.934 Test: vtophys_malloc_test ...passed 00:05:29.934 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:29.934 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:29.934 EAL: Restoring previous memory policy: 4 00:05:29.934 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.934 EAL: request: mp_malloc_sync 00:05:29.934 EAL: No shared files mode enabled, IPC is disabled 00:05:29.934 EAL: Heap on socket 0 was expanded by 4MB 00:05:29.934 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.934 EAL: request: mp_malloc_sync 00:05:29.934 EAL: No shared files mode enabled, IPC is disabled 00:05:29.934 EAL: Heap on socket 0 was shrunk by 4MB 00:05:29.934 EAL: Trying to obtain current memory policy. 
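At this point EAL has reserved the per-socket memseg address ranges and will back them from the 2048 kB hugepage pool reported by setup.sh status earlier in the log ("Hugepages will be freed exactly as allocated"). A small sketch for inspecting that pool on a two-node box like this one, using only standard sysfs/procfs paths:

#!/usr/bin/env bash
# Report the per-NUMA-node 2 MiB hugepage pools that the EAL heap draws from.
set -euo pipefail

echo "== /proc/meminfo =="
grep -i huge /proc/meminfo

echo
echo "== per-node 2048 kB pools =="
for node in /sys/devices/system/node/node[0-9]*; do
  total=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")
  free=$(cat "$node/hugepages/hugepages-2048kB/free_hugepages")
  echo "$(basename "$node"): $free free / $total total"
done

In this run only node0 carries a 2048-page pool (node1 reports 0 / 0), which is why every heap expansion in the trace lands on socket 0.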
00:05:29.934 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:29.934 EAL: Restoring previous memory policy: 4 00:05:29.934 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.934 EAL: request: mp_malloc_sync 00:05:29.934 EAL: No shared files mode enabled, IPC is disabled 00:05:29.934 EAL: Heap on socket 0 was expanded by 6MB 00:05:29.934 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.934 EAL: request: mp_malloc_sync 00:05:29.934 EAL: No shared files mode enabled, IPC is disabled 00:05:29.934 EAL: Heap on socket 0 was shrunk by 6MB 00:05:29.934 EAL: Trying to obtain current memory policy. 00:05:29.934 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:29.934 EAL: Restoring previous memory policy: 4 00:05:29.934 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.934 EAL: request: mp_malloc_sync 00:05:29.934 EAL: No shared files mode enabled, IPC is disabled 00:05:29.934 EAL: Heap on socket 0 was expanded by 10MB 00:05:29.934 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.934 EAL: request: mp_malloc_sync 00:05:29.934 EAL: No shared files mode enabled, IPC is disabled 00:05:29.934 EAL: Heap on socket 0 was shrunk by 10MB 00:05:29.934 EAL: Trying to obtain current memory policy. 00:05:29.934 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:29.934 EAL: Restoring previous memory policy: 4 00:05:29.934 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.934 EAL: request: mp_malloc_sync 00:05:29.934 EAL: No shared files mode enabled, IPC is disabled 00:05:29.934 EAL: Heap on socket 0 was expanded by 18MB 00:05:29.934 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.934 EAL: request: mp_malloc_sync 00:05:29.934 EAL: No shared files mode enabled, IPC is disabled 00:05:29.934 EAL: Heap on socket 0 was shrunk by 18MB 00:05:29.934 EAL: Trying to obtain current memory policy. 00:05:29.934 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:29.934 EAL: Restoring previous memory policy: 4 00:05:29.934 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.934 EAL: request: mp_malloc_sync 00:05:29.934 EAL: No shared files mode enabled, IPC is disabled 00:05:29.934 EAL: Heap on socket 0 was expanded by 34MB 00:05:29.934 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.934 EAL: request: mp_malloc_sync 00:05:29.934 EAL: No shared files mode enabled, IPC is disabled 00:05:29.934 EAL: Heap on socket 0 was shrunk by 34MB 00:05:29.934 EAL: Trying to obtain current memory policy. 00:05:29.934 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:29.934 EAL: Restoring previous memory policy: 4 00:05:29.934 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.934 EAL: request: mp_malloc_sync 00:05:29.934 EAL: No shared files mode enabled, IPC is disabled 00:05:29.934 EAL: Heap on socket 0 was expanded by 66MB 00:05:29.934 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.934 EAL: request: mp_malloc_sync 00:05:29.934 EAL: No shared files mode enabled, IPC is disabled 00:05:29.934 EAL: Heap on socket 0 was shrunk by 66MB 00:05:29.934 EAL: Trying to obtain current memory policy. 
00:05:29.934 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:29.934 EAL: Restoring previous memory policy: 4 00:05:29.934 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.934 EAL: request: mp_malloc_sync 00:05:29.934 EAL: No shared files mode enabled, IPC is disabled 00:05:29.934 EAL: Heap on socket 0 was expanded by 130MB 00:05:30.191 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.191 EAL: request: mp_malloc_sync 00:05:30.191 EAL: No shared files mode enabled, IPC is disabled 00:05:30.191 EAL: Heap on socket 0 was shrunk by 130MB 00:05:30.191 EAL: Trying to obtain current memory policy. 00:05:30.191 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.191 EAL: Restoring previous memory policy: 4 00:05:30.191 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.191 EAL: request: mp_malloc_sync 00:05:30.191 EAL: No shared files mode enabled, IPC is disabled 00:05:30.191 EAL: Heap on socket 0 was expanded by 258MB 00:05:30.191 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.191 EAL: request: mp_malloc_sync 00:05:30.191 EAL: No shared files mode enabled, IPC is disabled 00:05:30.191 EAL: Heap on socket 0 was shrunk by 258MB 00:05:30.191 EAL: Trying to obtain current memory policy. 00:05:30.191 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.447 EAL: Restoring previous memory policy: 4 00:05:30.447 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.447 EAL: request: mp_malloc_sync 00:05:30.447 EAL: No shared files mode enabled, IPC is disabled 00:05:30.447 EAL: Heap on socket 0 was expanded by 514MB 00:05:30.447 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.702 EAL: request: mp_malloc_sync 00:05:30.702 EAL: No shared files mode enabled, IPC is disabled 00:05:30.702 EAL: Heap on socket 0 was shrunk by 514MB 00:05:30.702 EAL: Trying to obtain current memory policy. 
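The alternating "Setting policy MPOL_PREFERRED for socket 0" / "Restoring previous memory policy" pairs are EAL temporarily preferring socket-0 memory while vtophys_spdk_malloc_test grows the heap in roughly doubling steps (4 MB, 6 MB, 10 MB, ... up to 1 GB below) and shrinks it back. The same NUMA policy machinery can be observed from the shell with numactl; a tiny sketch assuming numactl is installed:

# Show the memory policy and node bindings of the current shell.
numactl --show

# Run a command with memory preferentially allocated from node 0,
# analogous to the MPOL_PREFERRED toggling in the EAL trace above.
numactl --preferred=0 -- numactl --show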
00:05:30.702 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.958 EAL: Restoring previous memory policy: 4 00:05:30.958 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.958 EAL: request: mp_malloc_sync 00:05:30.958 EAL: No shared files mode enabled, IPC is disabled 00:05:30.958 EAL: Heap on socket 0 was expanded by 1026MB 00:05:31.216 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.216 EAL: request: mp_malloc_sync 00:05:31.216 EAL: No shared files mode enabled, IPC is disabled 00:05:31.216 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:31.216 passed 00:05:31.216 00:05:31.216 Run Summary: Type Total Ran Passed Failed Inactive 00:05:31.216 suites 1 1 n/a 0 0 00:05:31.216 tests 2 2 2 0 0 00:05:31.216 asserts 497 497 497 0 n/a 00:05:31.216 00:05:31.216 Elapsed time = 1.310 seconds 00:05:31.216 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.216 EAL: request: mp_malloc_sync 00:05:31.216 EAL: No shared files mode enabled, IPC is disabled 00:05:31.216 EAL: Heap on socket 0 was shrunk by 2MB 00:05:31.216 EAL: No shared files mode enabled, IPC is disabled 00:05:31.216 EAL: No shared files mode enabled, IPC is disabled 00:05:31.216 EAL: No shared files mode enabled, IPC is disabled 00:05:31.216 00:05:31.216 real 0m1.427s 00:05:31.216 user 0m0.832s 00:05:31.216 sys 0m0.560s 00:05:31.216 13:44:26 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.216 13:44:26 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:31.216 ************************************ 00:05:31.216 END TEST env_vtophys 00:05:31.216 ************************************ 00:05:31.475 13:44:26 env -- common/autotest_common.sh@1142 -- # return 0 00:05:31.475 13:44:26 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:31.475 13:44:26 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:31.475 13:44:26 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.475 13:44:26 env -- common/autotest_common.sh@10 -- # set +x 00:05:31.475 ************************************ 00:05:31.475 START TEST env_pci 00:05:31.475 ************************************ 00:05:31.475 13:44:26 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:31.475 00:05:31.475 00:05:31.475 CUnit - A unit testing framework for C - Version 2.1-3 00:05:31.475 http://cunit.sourceforge.net/ 00:05:31.475 00:05:31.475 00:05:31.475 Suite: pci 00:05:31.475 Test: pci_hook ...[2024-07-15 13:44:26.098909] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3630067 has claimed it 00:05:31.475 EAL: Cannot find device (10000:00:01.0) 00:05:31.475 EAL: Failed to attach device on primary process 00:05:31.475 passed 00:05:31.475 00:05:31.475 Run Summary: Type Total Ran Passed Failed Inactive 00:05:31.475 suites 1 1 n/a 0 0 00:05:31.475 tests 1 1 1 0 0 00:05:31.475 asserts 25 25 25 0 n/a 00:05:31.475 00:05:31.475 Elapsed time = 0.022 seconds 00:05:31.475 00:05:31.475 real 0m0.036s 00:05:31.475 user 0m0.011s 00:05:31.475 sys 0m0.025s 00:05:31.475 13:44:26 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.475 13:44:26 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:31.475 ************************************ 00:05:31.475 END TEST env_pci 00:05:31.475 ************************************ 
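Each of the env suites above is an ordinary standalone binary that run_test wraps with timing and the START/END TEST banners. They can be re-run directly from a built tree; a sketch using the paths from this trace, assuming the checkout lives at SPDK_DIR, the test binaries are built, and hugepages are already configured:

#!/usr/bin/env bash
# Re-run the env unit binaries exercised above, outside the autotest harness.
set -euo pipefail

SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}

"$SPDK_DIR/test/env/memory/memory_ut"     # mem map alloc/translate/register cases
"$SPDK_DIR/test/env/vtophys/vtophys"      # EAL init plus the malloc expand/shrink suite
"$SPDK_DIR/test/env/pci/pci_ut"           # passes even while logging the claim/attach errors seen above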
00:05:31.475 13:44:26 env -- common/autotest_common.sh@1142 -- # return 0 00:05:31.475 13:44:26 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:31.475 13:44:26 env -- env/env.sh@15 -- # uname 00:05:31.475 13:44:26 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:31.475 13:44:26 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:31.475 13:44:26 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:31.475 13:44:26 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:31.475 13:44:26 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.475 13:44:26 env -- common/autotest_common.sh@10 -- # set +x 00:05:31.475 ************************************ 00:05:31.475 START TEST env_dpdk_post_init 00:05:31.475 ************************************ 00:05:31.475 13:44:26 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:31.475 EAL: Detected CPU lcores: 48 00:05:31.475 EAL: Detected NUMA nodes: 2 00:05:31.475 EAL: Detected shared linkage of DPDK 00:05:31.475 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:31.475 EAL: Selected IOVA mode 'VA' 00:05:31.475 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.475 EAL: VFIO support initialized 00:05:31.475 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:31.475 EAL: Using IOMMU type 1 (Type 1) 00:05:31.475 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:05:31.475 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:05:31.475 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:05:31.758 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:05:31.758 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:05:31.758 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:05:31.758 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:05:31.758 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:05:31.758 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:05:31.758 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:05:31.758 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:05:31.758 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:05:31.758 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:05:31.758 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:05:31.758 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:05:31.758 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:05:32.694 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:82:00.0 (socket 1) 00:05:35.970 EAL: Releasing PCI mapped resource for 0000:82:00.0 00:05:35.970 EAL: Calling pci_unmap_resource for 0000:82:00.0 at 0x202001040000 00:05:35.970 Starting DPDK initialization... 00:05:35.970 Starting SPDK post initialization... 00:05:35.970 SPDK NVMe probe 00:05:35.970 Attaching to 0000:82:00.0 00:05:35.970 Attached to 0000:82:00.0 00:05:35.970 Cleaning up... 
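env_dpdk_post_init probes the sixteen I/OAT channels and then attaches to the NVMe controller at 0000:82:00.0 on socket 1 before cleaning up. A quick sysfs check confirms where that controller sits and which driver currently owns it (vfio-pci while these tests run, nvme again after setup.sh reset):

#!/usr/bin/env bash
# Show vendor/device, NUMA node and bound driver for a PCI function.
set -euo pipefail

bdf=${1:-0000:82:00.0}
dev=/sys/bus/pci/devices/$bdf

printf '%-16s %s:%s\n' "$bdf" "$(cat "$dev/vendor")" "$(cat "$dev/device")"
printf 'numa_node      = %s\n' "$(cat "$dev/numa_node")"
if [[ -e $dev/driver ]]; then
  printf 'bound driver   = %s\n' "$(basename "$(readlink -f "$dev/driver")")"
else
  echo 'bound driver   = (none)'
fi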
00:05:35.970 00:05:35.970 real 0m4.411s 00:05:35.970 user 0m3.283s 00:05:35.970 sys 0m0.192s 00:05:35.970 13:44:30 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.970 13:44:30 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:35.970 ************************************ 00:05:35.970 END TEST env_dpdk_post_init 00:05:35.970 ************************************ 00:05:35.970 13:44:30 env -- common/autotest_common.sh@1142 -- # return 0 00:05:35.970 13:44:30 env -- env/env.sh@26 -- # uname 00:05:35.970 13:44:30 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:35.970 13:44:30 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:35.970 13:44:30 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:35.970 13:44:30 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.970 13:44:30 env -- common/autotest_common.sh@10 -- # set +x 00:05:35.970 ************************************ 00:05:35.970 START TEST env_mem_callbacks 00:05:35.970 ************************************ 00:05:35.970 13:44:30 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:35.970 EAL: Detected CPU lcores: 48 00:05:35.970 EAL: Detected NUMA nodes: 2 00:05:35.970 EAL: Detected shared linkage of DPDK 00:05:35.970 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:35.970 EAL: Selected IOVA mode 'VA' 00:05:35.970 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.970 EAL: VFIO support initialized 00:05:35.970 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:35.970 00:05:35.970 00:05:35.970 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.970 http://cunit.sourceforge.net/ 00:05:35.970 00:05:35.970 00:05:35.970 Suite: memory 00:05:35.970 Test: test ... 
00:05:35.970 register 0x200000200000 2097152 00:05:35.970 malloc 3145728 00:05:35.970 register 0x200000400000 4194304 00:05:35.970 buf 0x200000500000 len 3145728 PASSED 00:05:35.970 malloc 64 00:05:35.970 buf 0x2000004fff40 len 64 PASSED 00:05:35.970 malloc 4194304 00:05:35.970 register 0x200000800000 6291456 00:05:35.970 buf 0x200000a00000 len 4194304 PASSED 00:05:35.970 free 0x200000500000 3145728 00:05:35.970 free 0x2000004fff40 64 00:05:35.970 unregister 0x200000400000 4194304 PASSED 00:05:35.970 free 0x200000a00000 4194304 00:05:35.970 unregister 0x200000800000 6291456 PASSED 00:05:35.970 malloc 8388608 00:05:35.970 register 0x200000400000 10485760 00:05:35.970 buf 0x200000600000 len 8388608 PASSED 00:05:35.970 free 0x200000600000 8388608 00:05:35.970 unregister 0x200000400000 10485760 PASSED 00:05:35.970 passed 00:05:35.970 00:05:35.970 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.970 suites 1 1 n/a 0 0 00:05:35.970 tests 1 1 1 0 0 00:05:35.970 asserts 15 15 15 0 n/a 00:05:35.970 00:05:35.970 Elapsed time = 0.005 seconds 00:05:35.970 00:05:35.970 real 0m0.049s 00:05:35.970 user 0m0.014s 00:05:35.970 sys 0m0.035s 00:05:35.970 13:44:30 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.970 13:44:30 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:35.970 ************************************ 00:05:35.970 END TEST env_mem_callbacks 00:05:35.970 ************************************ 00:05:35.970 13:44:30 env -- common/autotest_common.sh@1142 -- # return 0 00:05:35.970 00:05:35.970 real 0m6.380s 00:05:35.970 user 0m4.414s 00:05:35.970 sys 0m1.012s 00:05:35.970 13:44:30 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.970 13:44:30 env -- common/autotest_common.sh@10 -- # set +x 00:05:35.970 ************************************ 00:05:35.970 END TEST env 00:05:35.970 ************************************ 00:05:35.970 13:44:30 -- common/autotest_common.sh@1142 -- # return 0 00:05:35.970 13:44:30 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:35.970 13:44:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:35.970 13:44:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.970 13:44:30 -- common/autotest_common.sh@10 -- # set +x 00:05:35.970 ************************************ 00:05:35.970 START TEST rpc 00:05:35.970 ************************************ 00:05:35.970 13:44:30 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:35.970 * Looking for test storage... 00:05:35.970 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:35.970 13:44:30 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3630725 00:05:35.970 13:44:30 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:35.970 13:44:30 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:35.970 13:44:30 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3630725 00:05:35.971 13:44:30 rpc -- common/autotest_common.sh@829 -- # '[' -z 3630725 ']' 00:05:35.971 13:44:30 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.971 13:44:30 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:35.971 13:44:30 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:35.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.971 13:44:30 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:35.971 13:44:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.228 [2024-07-15 13:44:30.853045] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:05:36.228 [2024-07-15 13:44:30.853137] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3630725 ] 00:05:36.228 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.228 [2024-07-15 13:44:30.910753] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.229 [2024-07-15 13:44:31.022212] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:36.229 [2024-07-15 13:44:31.022266] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3630725' to capture a snapshot of events at runtime. 00:05:36.229 [2024-07-15 13:44:31.022294] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:36.229 [2024-07-15 13:44:31.022305] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:36.229 [2024-07-15 13:44:31.022315] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3630725 for offline analysis/debug. 00:05:36.229 [2024-07-15 13:44:31.022347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.487 13:44:31 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:36.487 13:44:31 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:36.487 13:44:31 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:36.487 13:44:31 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:36.487 13:44:31 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:36.487 13:44:31 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:36.487 13:44:31 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:36.487 13:44:31 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.487 13:44:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.487 ************************************ 00:05:36.487 START TEST rpc_integrity 00:05:36.487 ************************************ 00:05:36.487 13:44:31 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:36.487 13:44:31 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:36.487 13:44:31 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.487 13:44:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.487 13:44:31 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:36.487 13:44:31 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:05:36.487 13:44:31 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:36.745 13:44:31 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:36.745 13:44:31 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:36.745 13:44:31 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.745 13:44:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.745 13:44:31 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:36.745 13:44:31 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:36.745 13:44:31 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:36.745 13:44:31 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.745 13:44:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.745 13:44:31 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:36.745 13:44:31 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:36.745 { 00:05:36.745 "name": "Malloc0", 00:05:36.745 "aliases": [ 00:05:36.745 "866a4497-7539-4714-bd42-bf71433e8f75" 00:05:36.745 ], 00:05:36.745 "product_name": "Malloc disk", 00:05:36.745 "block_size": 512, 00:05:36.745 "num_blocks": 16384, 00:05:36.745 "uuid": "866a4497-7539-4714-bd42-bf71433e8f75", 00:05:36.745 "assigned_rate_limits": { 00:05:36.745 "rw_ios_per_sec": 0, 00:05:36.745 "rw_mbytes_per_sec": 0, 00:05:36.745 "r_mbytes_per_sec": 0, 00:05:36.745 "w_mbytes_per_sec": 0 00:05:36.745 }, 00:05:36.745 "claimed": false, 00:05:36.745 "zoned": false, 00:05:36.745 "supported_io_types": { 00:05:36.745 "read": true, 00:05:36.745 "write": true, 00:05:36.745 "unmap": true, 00:05:36.745 "flush": true, 00:05:36.745 "reset": true, 00:05:36.745 "nvme_admin": false, 00:05:36.746 "nvme_io": false, 00:05:36.746 "nvme_io_md": false, 00:05:36.746 "write_zeroes": true, 00:05:36.746 "zcopy": true, 00:05:36.746 "get_zone_info": false, 00:05:36.746 "zone_management": false, 00:05:36.746 "zone_append": false, 00:05:36.746 "compare": false, 00:05:36.746 "compare_and_write": false, 00:05:36.746 "abort": true, 00:05:36.746 "seek_hole": false, 00:05:36.746 "seek_data": false, 00:05:36.746 "copy": true, 00:05:36.746 "nvme_iov_md": false 00:05:36.746 }, 00:05:36.746 "memory_domains": [ 00:05:36.746 { 00:05:36.746 "dma_device_id": "system", 00:05:36.746 "dma_device_type": 1 00:05:36.746 }, 00:05:36.746 { 00:05:36.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:36.746 "dma_device_type": 2 00:05:36.746 } 00:05:36.746 ], 00:05:36.746 "driver_specific": {} 00:05:36.746 } 00:05:36.746 ]' 00:05:36.746 13:44:31 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:36.746 13:44:31 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:36.746 13:44:31 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:36.746 13:44:31 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.746 13:44:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.746 [2024-07-15 13:44:31.408813] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:36.746 [2024-07-15 13:44:31.408870] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:36.746 [2024-07-15 13:44:31.408891] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x8373e0 00:05:36.746 [2024-07-15 13:44:31.408904] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:36.746 
[2024-07-15 13:44:31.410160] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:36.746 [2024-07-15 13:44:31.410182] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:36.746 Passthru0 00:05:36.746 13:44:31 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:36.746 13:44:31 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:36.746 13:44:31 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.746 13:44:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.746 13:44:31 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:36.746 13:44:31 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:36.746 { 00:05:36.746 "name": "Malloc0", 00:05:36.746 "aliases": [ 00:05:36.746 "866a4497-7539-4714-bd42-bf71433e8f75" 00:05:36.746 ], 00:05:36.746 "product_name": "Malloc disk", 00:05:36.746 "block_size": 512, 00:05:36.746 "num_blocks": 16384, 00:05:36.746 "uuid": "866a4497-7539-4714-bd42-bf71433e8f75", 00:05:36.746 "assigned_rate_limits": { 00:05:36.746 "rw_ios_per_sec": 0, 00:05:36.746 "rw_mbytes_per_sec": 0, 00:05:36.746 "r_mbytes_per_sec": 0, 00:05:36.746 "w_mbytes_per_sec": 0 00:05:36.746 }, 00:05:36.746 "claimed": true, 00:05:36.746 "claim_type": "exclusive_write", 00:05:36.746 "zoned": false, 00:05:36.746 "supported_io_types": { 00:05:36.746 "read": true, 00:05:36.746 "write": true, 00:05:36.746 "unmap": true, 00:05:36.746 "flush": true, 00:05:36.746 "reset": true, 00:05:36.746 "nvme_admin": false, 00:05:36.746 "nvme_io": false, 00:05:36.746 "nvme_io_md": false, 00:05:36.746 "write_zeroes": true, 00:05:36.746 "zcopy": true, 00:05:36.746 "get_zone_info": false, 00:05:36.746 "zone_management": false, 00:05:36.746 "zone_append": false, 00:05:36.746 "compare": false, 00:05:36.746 "compare_and_write": false, 00:05:36.746 "abort": true, 00:05:36.746 "seek_hole": false, 00:05:36.746 "seek_data": false, 00:05:36.746 "copy": true, 00:05:36.746 "nvme_iov_md": false 00:05:36.746 }, 00:05:36.746 "memory_domains": [ 00:05:36.746 { 00:05:36.746 "dma_device_id": "system", 00:05:36.746 "dma_device_type": 1 00:05:36.746 }, 00:05:36.746 { 00:05:36.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:36.746 "dma_device_type": 2 00:05:36.746 } 00:05:36.746 ], 00:05:36.746 "driver_specific": {} 00:05:36.746 }, 00:05:36.746 { 00:05:36.746 "name": "Passthru0", 00:05:36.746 "aliases": [ 00:05:36.746 "949f6b42-ed65-568c-91d7-c908867aaaf7" 00:05:36.746 ], 00:05:36.746 "product_name": "passthru", 00:05:36.746 "block_size": 512, 00:05:36.746 "num_blocks": 16384, 00:05:36.746 "uuid": "949f6b42-ed65-568c-91d7-c908867aaaf7", 00:05:36.746 "assigned_rate_limits": { 00:05:36.746 "rw_ios_per_sec": 0, 00:05:36.746 "rw_mbytes_per_sec": 0, 00:05:36.746 "r_mbytes_per_sec": 0, 00:05:36.746 "w_mbytes_per_sec": 0 00:05:36.746 }, 00:05:36.746 "claimed": false, 00:05:36.746 "zoned": false, 00:05:36.746 "supported_io_types": { 00:05:36.746 "read": true, 00:05:36.746 "write": true, 00:05:36.746 "unmap": true, 00:05:36.746 "flush": true, 00:05:36.746 "reset": true, 00:05:36.746 "nvme_admin": false, 00:05:36.746 "nvme_io": false, 00:05:36.746 "nvme_io_md": false, 00:05:36.746 "write_zeroes": true, 00:05:36.746 "zcopy": true, 00:05:36.746 "get_zone_info": false, 00:05:36.746 "zone_management": false, 00:05:36.746 "zone_append": false, 00:05:36.746 "compare": false, 00:05:36.746 "compare_and_write": false, 00:05:36.746 "abort": true, 00:05:36.746 "seek_hole": false, 
00:05:36.746 "seek_data": false, 00:05:36.746 "copy": true, 00:05:36.746 "nvme_iov_md": false 00:05:36.746 }, 00:05:36.746 "memory_domains": [ 00:05:36.746 { 00:05:36.746 "dma_device_id": "system", 00:05:36.746 "dma_device_type": 1 00:05:36.746 }, 00:05:36.746 { 00:05:36.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:36.746 "dma_device_type": 2 00:05:36.746 } 00:05:36.746 ], 00:05:36.746 "driver_specific": { 00:05:36.746 "passthru": { 00:05:36.746 "name": "Passthru0", 00:05:36.746 "base_bdev_name": "Malloc0" 00:05:36.746 } 00:05:36.746 } 00:05:36.746 } 00:05:36.746 ]' 00:05:36.746 13:44:31 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:36.746 13:44:31 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:36.746 13:44:31 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:36.746 13:44:31 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.746 13:44:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.746 13:44:31 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:36.746 13:44:31 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:36.746 13:44:31 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.746 13:44:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.746 13:44:31 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:36.746 13:44:31 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:36.746 13:44:31 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.746 13:44:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.746 13:44:31 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:36.746 13:44:31 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:36.746 13:44:31 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:36.746 13:44:31 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:36.746 00:05:36.746 real 0m0.237s 00:05:36.746 user 0m0.154s 00:05:36.746 sys 0m0.025s 00:05:36.746 13:44:31 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.746 13:44:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.746 ************************************ 00:05:36.746 END TEST rpc_integrity 00:05:36.746 ************************************ 00:05:36.746 13:44:31 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:36.746 13:44:31 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:36.746 13:44:31 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:36.746 13:44:31 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.746 13:44:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.746 ************************************ 00:05:36.746 START TEST rpc_plugins 00:05:36.746 ************************************ 00:05:36.746 13:44:31 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:36.746 13:44:31 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:36.746 13:44:31 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.746 13:44:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:37.004 13:44:31 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.004 13:44:31 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:37.004 13:44:31 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:05:37.004 13:44:31 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.004 13:44:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:37.004 13:44:31 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.004 13:44:31 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:37.004 { 00:05:37.004 "name": "Malloc1", 00:05:37.004 "aliases": [ 00:05:37.005 "6c7b38ee-420c-45be-89b2-da9894485876" 00:05:37.005 ], 00:05:37.005 "product_name": "Malloc disk", 00:05:37.005 "block_size": 4096, 00:05:37.005 "num_blocks": 256, 00:05:37.005 "uuid": "6c7b38ee-420c-45be-89b2-da9894485876", 00:05:37.005 "assigned_rate_limits": { 00:05:37.005 "rw_ios_per_sec": 0, 00:05:37.005 "rw_mbytes_per_sec": 0, 00:05:37.005 "r_mbytes_per_sec": 0, 00:05:37.005 "w_mbytes_per_sec": 0 00:05:37.005 }, 00:05:37.005 "claimed": false, 00:05:37.005 "zoned": false, 00:05:37.005 "supported_io_types": { 00:05:37.005 "read": true, 00:05:37.005 "write": true, 00:05:37.005 "unmap": true, 00:05:37.005 "flush": true, 00:05:37.005 "reset": true, 00:05:37.005 "nvme_admin": false, 00:05:37.005 "nvme_io": false, 00:05:37.005 "nvme_io_md": false, 00:05:37.005 "write_zeroes": true, 00:05:37.005 "zcopy": true, 00:05:37.005 "get_zone_info": false, 00:05:37.005 "zone_management": false, 00:05:37.005 "zone_append": false, 00:05:37.005 "compare": false, 00:05:37.005 "compare_and_write": false, 00:05:37.005 "abort": true, 00:05:37.005 "seek_hole": false, 00:05:37.005 "seek_data": false, 00:05:37.005 "copy": true, 00:05:37.005 "nvme_iov_md": false 00:05:37.005 }, 00:05:37.005 "memory_domains": [ 00:05:37.005 { 00:05:37.005 "dma_device_id": "system", 00:05:37.005 "dma_device_type": 1 00:05:37.005 }, 00:05:37.005 { 00:05:37.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:37.005 "dma_device_type": 2 00:05:37.005 } 00:05:37.005 ], 00:05:37.005 "driver_specific": {} 00:05:37.005 } 00:05:37.005 ]' 00:05:37.005 13:44:31 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:37.005 13:44:31 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:37.005 13:44:31 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:37.005 13:44:31 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.005 13:44:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:37.005 13:44:31 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.005 13:44:31 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:37.005 13:44:31 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.005 13:44:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:37.005 13:44:31 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.005 13:44:31 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:37.005 13:44:31 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:37.005 13:44:31 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:37.005 00:05:37.005 real 0m0.106s 00:05:37.005 user 0m0.069s 00:05:37.005 sys 0m0.008s 00:05:37.005 13:44:31 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.005 13:44:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:37.005 ************************************ 00:05:37.005 END TEST rpc_plugins 00:05:37.005 ************************************ 00:05:37.005 13:44:31 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:37.005 13:44:31 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:37.005 13:44:31 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:37.005 13:44:31 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.005 13:44:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.005 ************************************ 00:05:37.005 START TEST rpc_trace_cmd_test 00:05:37.005 ************************************ 00:05:37.005 13:44:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:37.005 13:44:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:37.005 13:44:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:37.005 13:44:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.005 13:44:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:37.005 13:44:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.005 13:44:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:37.005 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3630725", 00:05:37.005 "tpoint_group_mask": "0x8", 00:05:37.005 "iscsi_conn": { 00:05:37.005 "mask": "0x2", 00:05:37.005 "tpoint_mask": "0x0" 00:05:37.005 }, 00:05:37.005 "scsi": { 00:05:37.005 "mask": "0x4", 00:05:37.005 "tpoint_mask": "0x0" 00:05:37.005 }, 00:05:37.005 "bdev": { 00:05:37.005 "mask": "0x8", 00:05:37.005 "tpoint_mask": "0xffffffffffffffff" 00:05:37.005 }, 00:05:37.005 "nvmf_rdma": { 00:05:37.005 "mask": "0x10", 00:05:37.005 "tpoint_mask": "0x0" 00:05:37.005 }, 00:05:37.005 "nvmf_tcp": { 00:05:37.005 "mask": "0x20", 00:05:37.005 "tpoint_mask": "0x0" 00:05:37.005 }, 00:05:37.005 "ftl": { 00:05:37.005 "mask": "0x40", 00:05:37.005 "tpoint_mask": "0x0" 00:05:37.005 }, 00:05:37.005 "blobfs": { 00:05:37.005 "mask": "0x80", 00:05:37.005 "tpoint_mask": "0x0" 00:05:37.005 }, 00:05:37.005 "dsa": { 00:05:37.005 "mask": "0x200", 00:05:37.005 "tpoint_mask": "0x0" 00:05:37.005 }, 00:05:37.005 "thread": { 00:05:37.005 "mask": "0x400", 00:05:37.005 "tpoint_mask": "0x0" 00:05:37.005 }, 00:05:37.005 "nvme_pcie": { 00:05:37.005 "mask": "0x800", 00:05:37.005 "tpoint_mask": "0x0" 00:05:37.005 }, 00:05:37.005 "iaa": { 00:05:37.005 "mask": "0x1000", 00:05:37.005 "tpoint_mask": "0x0" 00:05:37.005 }, 00:05:37.005 "nvme_tcp": { 00:05:37.005 "mask": "0x2000", 00:05:37.005 "tpoint_mask": "0x0" 00:05:37.005 }, 00:05:37.005 "bdev_nvme": { 00:05:37.005 "mask": "0x4000", 00:05:37.005 "tpoint_mask": "0x0" 00:05:37.005 }, 00:05:37.005 "sock": { 00:05:37.005 "mask": "0x8000", 00:05:37.005 "tpoint_mask": "0x0" 00:05:37.005 } 00:05:37.005 }' 00:05:37.005 13:44:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:37.005 13:44:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:37.005 13:44:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:37.005 13:44:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:37.005 13:44:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:37.005 13:44:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:37.005 13:44:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:37.264 13:44:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:37.264 13:44:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:37.264 13:44:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
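Note on the trace checks above: the following is a minimal hand-run sketch of the same verification, assuming the workspace layout and the scripts/rpc.py client referenced elsewhere in this log; the spdk_trace invocation simply follows the hint printed at target startup, and the socket-wait loop is an illustrative assumption rather than part of the harness.

    # Start the target with the bdev tracepoint group enabled, as rpc.sh does with '-e bdev'
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/bin/spdk_tgt" -e bdev &
    tgt_pid=$!
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done   # wait for the RPC socket

    # trace_get_info should report the shm path and a fully enabled bdev tpoint mask
    "$SPDK/scripts/rpc.py" trace_get_info | jq -r .tpoint_shm_path    # /dev/shm/spdk_tgt_trace.pid<pid>
    "$SPDK/scripts/rpc.py" trace_get_info | jq -r .bdev.tpoint_mask   # expect 0xffffffffffffffff

    # capture a snapshot of events at runtime, per the startup notice
    spdk_trace -s spdk_tgt -p "$tgt_pid"

    kill "$tgt_pid"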
00:05:37.264 00:05:37.264 real 0m0.184s 00:05:37.264 user 0m0.159s 00:05:37.264 sys 0m0.018s 00:05:37.264 13:44:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.264 13:44:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:37.264 ************************************ 00:05:37.264 END TEST rpc_trace_cmd_test 00:05:37.264 ************************************ 00:05:37.264 13:44:31 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:37.264 13:44:31 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:37.264 13:44:31 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:37.264 13:44:31 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:37.264 13:44:31 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:37.264 13:44:31 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.264 13:44:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.264 ************************************ 00:05:37.264 START TEST rpc_daemon_integrity 00:05:37.264 ************************************ 00:05:37.264 13:44:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:37.264 13:44:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:37.264 13:44:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.264 13:44:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.264 13:44:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.264 13:44:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:37.264 13:44:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:37.264 13:44:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:37.264 13:44:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:37.264 13:44:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.264 13:44:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.264 13:44:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.264 13:44:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:37.264 13:44:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:37.264 13:44:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.264 13:44:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.264 13:44:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.264 13:44:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:37.264 { 00:05:37.264 "name": "Malloc2", 00:05:37.264 "aliases": [ 00:05:37.264 "fc96c5c0-e8e7-4e05-b351-f5b69421d2e4" 00:05:37.264 ], 00:05:37.264 "product_name": "Malloc disk", 00:05:37.264 "block_size": 512, 00:05:37.264 "num_blocks": 16384, 00:05:37.264 "uuid": "fc96c5c0-e8e7-4e05-b351-f5b69421d2e4", 00:05:37.264 "assigned_rate_limits": { 00:05:37.264 "rw_ios_per_sec": 0, 00:05:37.264 "rw_mbytes_per_sec": 0, 00:05:37.264 "r_mbytes_per_sec": 0, 00:05:37.264 "w_mbytes_per_sec": 0 00:05:37.264 }, 00:05:37.264 "claimed": false, 00:05:37.264 "zoned": false, 00:05:37.264 "supported_io_types": { 00:05:37.264 "read": true, 00:05:37.264 "write": true, 00:05:37.264 "unmap": true, 00:05:37.264 "flush": true, 00:05:37.264 "reset": true, 00:05:37.264 "nvme_admin": false, 00:05:37.264 "nvme_io": false, 
00:05:37.264 "nvme_io_md": false, 00:05:37.264 "write_zeroes": true, 00:05:37.264 "zcopy": true, 00:05:37.264 "get_zone_info": false, 00:05:37.264 "zone_management": false, 00:05:37.264 "zone_append": false, 00:05:37.264 "compare": false, 00:05:37.264 "compare_and_write": false, 00:05:37.264 "abort": true, 00:05:37.264 "seek_hole": false, 00:05:37.264 "seek_data": false, 00:05:37.264 "copy": true, 00:05:37.264 "nvme_iov_md": false 00:05:37.264 }, 00:05:37.264 "memory_domains": [ 00:05:37.264 { 00:05:37.264 "dma_device_id": "system", 00:05:37.264 "dma_device_type": 1 00:05:37.264 }, 00:05:37.264 { 00:05:37.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:37.264 "dma_device_type": 2 00:05:37.264 } 00:05:37.264 ], 00:05:37.264 "driver_specific": {} 00:05:37.264 } 00:05:37.264 ]' 00:05:37.264 13:44:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:37.264 13:44:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:37.264 13:44:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:37.264 13:44:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.264 13:44:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.264 [2024-07-15 13:44:32.054637] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:37.264 [2024-07-15 13:44:32.054692] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:37.264 [2024-07-15 13:44:32.054713] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x8d52f0 00:05:37.264 [2024-07-15 13:44:32.054725] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:37.264 [2024-07-15 13:44:32.055970] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:37.264 [2024-07-15 13:44:32.055996] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:37.264 Passthru0 00:05:37.264 13:44:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.264 13:44:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:37.264 13:44:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.264 13:44:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.264 13:44:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.264 13:44:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:37.264 { 00:05:37.264 "name": "Malloc2", 00:05:37.264 "aliases": [ 00:05:37.264 "fc96c5c0-e8e7-4e05-b351-f5b69421d2e4" 00:05:37.264 ], 00:05:37.264 "product_name": "Malloc disk", 00:05:37.264 "block_size": 512, 00:05:37.264 "num_blocks": 16384, 00:05:37.264 "uuid": "fc96c5c0-e8e7-4e05-b351-f5b69421d2e4", 00:05:37.264 "assigned_rate_limits": { 00:05:37.264 "rw_ios_per_sec": 0, 00:05:37.264 "rw_mbytes_per_sec": 0, 00:05:37.264 "r_mbytes_per_sec": 0, 00:05:37.264 "w_mbytes_per_sec": 0 00:05:37.264 }, 00:05:37.264 "claimed": true, 00:05:37.264 "claim_type": "exclusive_write", 00:05:37.264 "zoned": false, 00:05:37.265 "supported_io_types": { 00:05:37.265 "read": true, 00:05:37.265 "write": true, 00:05:37.265 "unmap": true, 00:05:37.265 "flush": true, 00:05:37.265 "reset": true, 00:05:37.265 "nvme_admin": false, 00:05:37.265 "nvme_io": false, 00:05:37.265 "nvme_io_md": false, 00:05:37.265 "write_zeroes": true, 00:05:37.265 "zcopy": true, 00:05:37.265 "get_zone_info": 
false, 00:05:37.265 "zone_management": false, 00:05:37.265 "zone_append": false, 00:05:37.265 "compare": false, 00:05:37.265 "compare_and_write": false, 00:05:37.265 "abort": true, 00:05:37.265 "seek_hole": false, 00:05:37.265 "seek_data": false, 00:05:37.265 "copy": true, 00:05:37.265 "nvme_iov_md": false 00:05:37.265 }, 00:05:37.265 "memory_domains": [ 00:05:37.265 { 00:05:37.265 "dma_device_id": "system", 00:05:37.265 "dma_device_type": 1 00:05:37.265 }, 00:05:37.265 { 00:05:37.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:37.265 "dma_device_type": 2 00:05:37.265 } 00:05:37.265 ], 00:05:37.265 "driver_specific": {} 00:05:37.265 }, 00:05:37.265 { 00:05:37.265 "name": "Passthru0", 00:05:37.265 "aliases": [ 00:05:37.265 "6ce641bf-ca0b-5606-b6c1-680ad129d130" 00:05:37.265 ], 00:05:37.265 "product_name": "passthru", 00:05:37.265 "block_size": 512, 00:05:37.265 "num_blocks": 16384, 00:05:37.265 "uuid": "6ce641bf-ca0b-5606-b6c1-680ad129d130", 00:05:37.265 "assigned_rate_limits": { 00:05:37.265 "rw_ios_per_sec": 0, 00:05:37.265 "rw_mbytes_per_sec": 0, 00:05:37.265 "r_mbytes_per_sec": 0, 00:05:37.265 "w_mbytes_per_sec": 0 00:05:37.265 }, 00:05:37.265 "claimed": false, 00:05:37.265 "zoned": false, 00:05:37.265 "supported_io_types": { 00:05:37.265 "read": true, 00:05:37.265 "write": true, 00:05:37.265 "unmap": true, 00:05:37.265 "flush": true, 00:05:37.265 "reset": true, 00:05:37.265 "nvme_admin": false, 00:05:37.265 "nvme_io": false, 00:05:37.265 "nvme_io_md": false, 00:05:37.265 "write_zeroes": true, 00:05:37.265 "zcopy": true, 00:05:37.265 "get_zone_info": false, 00:05:37.265 "zone_management": false, 00:05:37.265 "zone_append": false, 00:05:37.265 "compare": false, 00:05:37.265 "compare_and_write": false, 00:05:37.265 "abort": true, 00:05:37.265 "seek_hole": false, 00:05:37.265 "seek_data": false, 00:05:37.265 "copy": true, 00:05:37.265 "nvme_iov_md": false 00:05:37.265 }, 00:05:37.265 "memory_domains": [ 00:05:37.265 { 00:05:37.265 "dma_device_id": "system", 00:05:37.265 "dma_device_type": 1 00:05:37.265 }, 00:05:37.265 { 00:05:37.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:37.265 "dma_device_type": 2 00:05:37.265 } 00:05:37.265 ], 00:05:37.265 "driver_specific": { 00:05:37.265 "passthru": { 00:05:37.265 "name": "Passthru0", 00:05:37.265 "base_bdev_name": "Malloc2" 00:05:37.265 } 00:05:37.265 } 00:05:37.265 } 00:05:37.265 ]' 00:05:37.265 13:44:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:37.523 13:44:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:37.523 13:44:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:37.523 13:44:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.523 13:44:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.523 13:44:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.523 13:44:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:37.523 13:44:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.523 13:44:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.523 13:44:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.523 13:44:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:37.523 13:44:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.523 13:44:32 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.523 13:44:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.523 13:44:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:37.523 13:44:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:37.523 13:44:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:37.523 00:05:37.523 real 0m0.215s 00:05:37.523 user 0m0.131s 00:05:37.523 sys 0m0.028s 00:05:37.523 13:44:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.523 13:44:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.523 ************************************ 00:05:37.523 END TEST rpc_daemon_integrity 00:05:37.523 ************************************ 00:05:37.523 13:44:32 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:37.523 13:44:32 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:37.523 13:44:32 rpc -- rpc/rpc.sh@84 -- # killprocess 3630725 00:05:37.523 13:44:32 rpc -- common/autotest_common.sh@948 -- # '[' -z 3630725 ']' 00:05:37.523 13:44:32 rpc -- common/autotest_common.sh@952 -- # kill -0 3630725 00:05:37.523 13:44:32 rpc -- common/autotest_common.sh@953 -- # uname 00:05:37.523 13:44:32 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:37.523 13:44:32 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3630725 00:05:37.523 13:44:32 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:37.523 13:44:32 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:37.523 13:44:32 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3630725' 00:05:37.523 killing process with pid 3630725 00:05:37.523 13:44:32 rpc -- common/autotest_common.sh@967 -- # kill 3630725 00:05:37.523 13:44:32 rpc -- common/autotest_common.sh@972 -- # wait 3630725 00:05:38.089 00:05:38.089 real 0m1.889s 00:05:38.089 user 0m2.345s 00:05:38.089 sys 0m0.583s 00:05:38.089 13:44:32 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.089 13:44:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.089 ************************************ 00:05:38.089 END TEST rpc 00:05:38.089 ************************************ 00:05:38.089 13:44:32 -- common/autotest_common.sh@1142 -- # return 0 00:05:38.089 13:44:32 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:38.089 13:44:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:38.089 13:44:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.089 13:44:32 -- common/autotest_common.sh@10 -- # set +x 00:05:38.089 ************************************ 00:05:38.089 START TEST skip_rpc 00:05:38.089 ************************************ 00:05:38.089 13:44:32 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:38.089 * Looking for test storage... 
00:05:38.089 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:38.089 13:44:32 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:38.089 13:44:32 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:38.089 13:44:32 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:38.089 13:44:32 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:38.089 13:44:32 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.089 13:44:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.089 ************************************ 00:05:38.089 START TEST skip_rpc 00:05:38.089 ************************************ 00:05:38.089 13:44:32 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:38.089 13:44:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3631162 00:05:38.089 13:44:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:38.089 13:44:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:38.089 13:44:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:38.089 [2024-07-15 13:44:32.823413] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:05:38.089 [2024-07-15 13:44:32.823487] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3631162 ] 00:05:38.089 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.089 [2024-07-15 13:44:32.878264] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.347 [2024-07-15 13:44:32.980652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.604 13:44:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:43.604 13:44:37 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:43.604 13:44:37 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:43.604 13:44:37 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:43.604 13:44:37 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:43.604 13:44:37 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:43.604 13:44:37 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:43.604 13:44:37 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:43.604 13:44:37 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.605 13:44:37 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.605 13:44:37 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:43.605 13:44:37 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:43.605 13:44:37 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:43.605 13:44:37 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:43.605 13:44:37 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:43.605 13:44:37 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:43.605 13:44:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3631162 00:05:43.605 13:44:37 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 3631162 ']' 00:05:43.605 13:44:37 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 3631162 00:05:43.605 13:44:37 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:43.605 13:44:37 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:43.605 13:44:37 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3631162 00:05:43.605 13:44:37 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:43.605 13:44:37 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:43.605 13:44:37 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3631162' 00:05:43.605 killing process with pid 3631162 00:05:43.605 13:44:37 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 3631162 00:05:43.605 13:44:37 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 3631162 00:05:43.605 00:05:43.605 real 0m5.457s 00:05:43.605 user 0m5.161s 00:05:43.605 sys 0m0.301s 00:05:43.605 13:44:38 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.605 13:44:38 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.605 ************************************ 00:05:43.605 END TEST skip_rpc 00:05:43.605 ************************************ 00:05:43.605 13:44:38 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:43.605 13:44:38 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:43.605 13:44:38 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.605 13:44:38 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.605 13:44:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.605 ************************************ 00:05:43.605 START TEST skip_rpc_with_json 00:05:43.605 ************************************ 00:05:43.605 13:44:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:43.605 13:44:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:43.605 13:44:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3631849 00:05:43.605 13:44:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:43.605 13:44:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:43.605 13:44:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3631849 00:05:43.605 13:44:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 3631849 ']' 00:05:43.605 13:44:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.605 13:44:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:43.605 13:44:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
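For reference, the skip_rpc run that finished just above boils down to the check sketched below; this is a hedged reconstruction using the same spdk_tgt flags and the scripts/rpc.py client assumed from this workspace, not the harness code itself.

    # With --no-rpc-server the target must not listen on /var/tmp/spdk.sock,
    # so any RPC call (here spdk_get_version) is expected to fail.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/bin/spdk_tgt" --no-rpc-server -m 0x1 &
    pid=$!
    sleep 5   # the test also sleeps before probing; the exact wait here is an assumption
    if "$SPDK/scripts/rpc.py" spdk_get_version; then
        echo "unexpected: RPC server answered" >&2
        exit 1
    fi
    kill "$pid"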
00:05:43.605 13:44:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:43.605 13:44:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:43.605 [2024-07-15 13:44:38.332708] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:05:43.605 [2024-07-15 13:44:38.332807] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3631849 ] 00:05:43.605 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.605 [2024-07-15 13:44:38.389858] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.862 [2024-07-15 13:44:38.490477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.119 13:44:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:44.119 13:44:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:44.119 13:44:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:44.119 13:44:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.119 13:44:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:44.119 [2024-07-15 13:44:38.738001] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:44.119 request: 00:05:44.119 { 00:05:44.119 "trtype": "tcp", 00:05:44.119 "method": "nvmf_get_transports", 00:05:44.119 "req_id": 1 00:05:44.119 } 00:05:44.119 Got JSON-RPC error response 00:05:44.119 response: 00:05:44.119 { 00:05:44.119 "code": -19, 00:05:44.119 "message": "No such device" 00:05:44.119 } 00:05:44.119 13:44:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:44.119 13:44:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:44.119 13:44:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.119 13:44:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:44.119 [2024-07-15 13:44:38.746146] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:44.119 13:44:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.119 13:44:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:44.119 13:44:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.119 13:44:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:44.119 13:44:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.119 13:44:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:44.119 { 00:05:44.119 "subsystems": [ 00:05:44.119 { 00:05:44.119 "subsystem": "vfio_user_target", 00:05:44.119 "config": null 00:05:44.119 }, 00:05:44.119 { 00:05:44.119 "subsystem": "keyring", 00:05:44.119 "config": [] 00:05:44.119 }, 00:05:44.119 { 00:05:44.119 "subsystem": "iobuf", 00:05:44.119 "config": [ 00:05:44.119 { 00:05:44.119 "method": "iobuf_set_options", 00:05:44.119 "params": { 00:05:44.119 "small_pool_count": 8192, 00:05:44.119 "large_pool_count": 1024, 00:05:44.119 "small_bufsize": 8192, 00:05:44.119 "large_bufsize": 
135168 00:05:44.119 } 00:05:44.119 } 00:05:44.119 ] 00:05:44.119 }, 00:05:44.119 { 00:05:44.119 "subsystem": "sock", 00:05:44.119 "config": [ 00:05:44.119 { 00:05:44.119 "method": "sock_set_default_impl", 00:05:44.119 "params": { 00:05:44.119 "impl_name": "posix" 00:05:44.119 } 00:05:44.119 }, 00:05:44.119 { 00:05:44.119 "method": "sock_impl_set_options", 00:05:44.119 "params": { 00:05:44.119 "impl_name": "ssl", 00:05:44.119 "recv_buf_size": 4096, 00:05:44.119 "send_buf_size": 4096, 00:05:44.119 "enable_recv_pipe": true, 00:05:44.119 "enable_quickack": false, 00:05:44.119 "enable_placement_id": 0, 00:05:44.119 "enable_zerocopy_send_server": true, 00:05:44.119 "enable_zerocopy_send_client": false, 00:05:44.119 "zerocopy_threshold": 0, 00:05:44.119 "tls_version": 0, 00:05:44.119 "enable_ktls": false 00:05:44.119 } 00:05:44.119 }, 00:05:44.119 { 00:05:44.119 "method": "sock_impl_set_options", 00:05:44.119 "params": { 00:05:44.119 "impl_name": "posix", 00:05:44.119 "recv_buf_size": 2097152, 00:05:44.119 "send_buf_size": 2097152, 00:05:44.119 "enable_recv_pipe": true, 00:05:44.119 "enable_quickack": false, 00:05:44.119 "enable_placement_id": 0, 00:05:44.119 "enable_zerocopy_send_server": true, 00:05:44.119 "enable_zerocopy_send_client": false, 00:05:44.119 "zerocopy_threshold": 0, 00:05:44.119 "tls_version": 0, 00:05:44.119 "enable_ktls": false 00:05:44.119 } 00:05:44.119 } 00:05:44.119 ] 00:05:44.119 }, 00:05:44.119 { 00:05:44.119 "subsystem": "vmd", 00:05:44.119 "config": [] 00:05:44.119 }, 00:05:44.119 { 00:05:44.119 "subsystem": "accel", 00:05:44.119 "config": [ 00:05:44.119 { 00:05:44.119 "method": "accel_set_options", 00:05:44.119 "params": { 00:05:44.119 "small_cache_size": 128, 00:05:44.119 "large_cache_size": 16, 00:05:44.119 "task_count": 2048, 00:05:44.119 "sequence_count": 2048, 00:05:44.119 "buf_count": 2048 00:05:44.119 } 00:05:44.119 } 00:05:44.119 ] 00:05:44.119 }, 00:05:44.119 { 00:05:44.119 "subsystem": "bdev", 00:05:44.119 "config": [ 00:05:44.119 { 00:05:44.119 "method": "bdev_set_options", 00:05:44.119 "params": { 00:05:44.119 "bdev_io_pool_size": 65535, 00:05:44.119 "bdev_io_cache_size": 256, 00:05:44.119 "bdev_auto_examine": true, 00:05:44.119 "iobuf_small_cache_size": 128, 00:05:44.119 "iobuf_large_cache_size": 16 00:05:44.119 } 00:05:44.119 }, 00:05:44.119 { 00:05:44.119 "method": "bdev_raid_set_options", 00:05:44.119 "params": { 00:05:44.119 "process_window_size_kb": 1024 00:05:44.119 } 00:05:44.119 }, 00:05:44.119 { 00:05:44.119 "method": "bdev_iscsi_set_options", 00:05:44.119 "params": { 00:05:44.119 "timeout_sec": 30 00:05:44.119 } 00:05:44.119 }, 00:05:44.119 { 00:05:44.119 "method": "bdev_nvme_set_options", 00:05:44.119 "params": { 00:05:44.119 "action_on_timeout": "none", 00:05:44.119 "timeout_us": 0, 00:05:44.119 "timeout_admin_us": 0, 00:05:44.119 "keep_alive_timeout_ms": 10000, 00:05:44.119 "arbitration_burst": 0, 00:05:44.119 "low_priority_weight": 0, 00:05:44.119 "medium_priority_weight": 0, 00:05:44.119 "high_priority_weight": 0, 00:05:44.119 "nvme_adminq_poll_period_us": 10000, 00:05:44.119 "nvme_ioq_poll_period_us": 0, 00:05:44.119 "io_queue_requests": 0, 00:05:44.119 "delay_cmd_submit": true, 00:05:44.119 "transport_retry_count": 4, 00:05:44.119 "bdev_retry_count": 3, 00:05:44.119 "transport_ack_timeout": 0, 00:05:44.119 "ctrlr_loss_timeout_sec": 0, 00:05:44.119 "reconnect_delay_sec": 0, 00:05:44.119 "fast_io_fail_timeout_sec": 0, 00:05:44.119 "disable_auto_failback": false, 00:05:44.119 "generate_uuids": false, 00:05:44.119 "transport_tos": 0, 
00:05:44.119 "nvme_error_stat": false, 00:05:44.119 "rdma_srq_size": 0, 00:05:44.119 "io_path_stat": false, 00:05:44.119 "allow_accel_sequence": false, 00:05:44.119 "rdma_max_cq_size": 0, 00:05:44.119 "rdma_cm_event_timeout_ms": 0, 00:05:44.119 "dhchap_digests": [ 00:05:44.119 "sha256", 00:05:44.119 "sha384", 00:05:44.119 "sha512" 00:05:44.119 ], 00:05:44.119 "dhchap_dhgroups": [ 00:05:44.119 "null", 00:05:44.120 "ffdhe2048", 00:05:44.120 "ffdhe3072", 00:05:44.120 "ffdhe4096", 00:05:44.120 "ffdhe6144", 00:05:44.120 "ffdhe8192" 00:05:44.120 ] 00:05:44.120 } 00:05:44.120 }, 00:05:44.120 { 00:05:44.120 "method": "bdev_nvme_set_hotplug", 00:05:44.120 "params": { 00:05:44.120 "period_us": 100000, 00:05:44.120 "enable": false 00:05:44.120 } 00:05:44.120 }, 00:05:44.120 { 00:05:44.120 "method": "bdev_wait_for_examine" 00:05:44.120 } 00:05:44.120 ] 00:05:44.120 }, 00:05:44.120 { 00:05:44.120 "subsystem": "scsi", 00:05:44.120 "config": null 00:05:44.120 }, 00:05:44.120 { 00:05:44.120 "subsystem": "scheduler", 00:05:44.120 "config": [ 00:05:44.120 { 00:05:44.120 "method": "framework_set_scheduler", 00:05:44.120 "params": { 00:05:44.120 "name": "static" 00:05:44.120 } 00:05:44.120 } 00:05:44.120 ] 00:05:44.120 }, 00:05:44.120 { 00:05:44.120 "subsystem": "vhost_scsi", 00:05:44.120 "config": [] 00:05:44.120 }, 00:05:44.120 { 00:05:44.120 "subsystem": "vhost_blk", 00:05:44.120 "config": [] 00:05:44.120 }, 00:05:44.120 { 00:05:44.120 "subsystem": "ublk", 00:05:44.120 "config": [] 00:05:44.120 }, 00:05:44.120 { 00:05:44.120 "subsystem": "nbd", 00:05:44.120 "config": [] 00:05:44.120 }, 00:05:44.120 { 00:05:44.120 "subsystem": "nvmf", 00:05:44.120 "config": [ 00:05:44.120 { 00:05:44.120 "method": "nvmf_set_config", 00:05:44.120 "params": { 00:05:44.120 "discovery_filter": "match_any", 00:05:44.120 "admin_cmd_passthru": { 00:05:44.120 "identify_ctrlr": false 00:05:44.120 } 00:05:44.120 } 00:05:44.120 }, 00:05:44.120 { 00:05:44.120 "method": "nvmf_set_max_subsystems", 00:05:44.120 "params": { 00:05:44.120 "max_subsystems": 1024 00:05:44.120 } 00:05:44.120 }, 00:05:44.120 { 00:05:44.120 "method": "nvmf_set_crdt", 00:05:44.120 "params": { 00:05:44.120 "crdt1": 0, 00:05:44.120 "crdt2": 0, 00:05:44.120 "crdt3": 0 00:05:44.120 } 00:05:44.120 }, 00:05:44.120 { 00:05:44.120 "method": "nvmf_create_transport", 00:05:44.120 "params": { 00:05:44.120 "trtype": "TCP", 00:05:44.120 "max_queue_depth": 128, 00:05:44.120 "max_io_qpairs_per_ctrlr": 127, 00:05:44.120 "in_capsule_data_size": 4096, 00:05:44.120 "max_io_size": 131072, 00:05:44.120 "io_unit_size": 131072, 00:05:44.120 "max_aq_depth": 128, 00:05:44.120 "num_shared_buffers": 511, 00:05:44.120 "buf_cache_size": 4294967295, 00:05:44.120 "dif_insert_or_strip": false, 00:05:44.120 "zcopy": false, 00:05:44.120 "c2h_success": true, 00:05:44.120 "sock_priority": 0, 00:05:44.120 "abort_timeout_sec": 1, 00:05:44.120 "ack_timeout": 0, 00:05:44.120 "data_wr_pool_size": 0 00:05:44.120 } 00:05:44.120 } 00:05:44.120 ] 00:05:44.120 }, 00:05:44.120 { 00:05:44.120 "subsystem": "iscsi", 00:05:44.120 "config": [ 00:05:44.120 { 00:05:44.120 "method": "iscsi_set_options", 00:05:44.120 "params": { 00:05:44.120 "node_base": "iqn.2016-06.io.spdk", 00:05:44.120 "max_sessions": 128, 00:05:44.120 "max_connections_per_session": 2, 00:05:44.120 "max_queue_depth": 64, 00:05:44.120 "default_time2wait": 2, 00:05:44.120 "default_time2retain": 20, 00:05:44.120 "first_burst_length": 8192, 00:05:44.120 "immediate_data": true, 00:05:44.120 "allow_duplicated_isid": false, 00:05:44.120 
"error_recovery_level": 0, 00:05:44.120 "nop_timeout": 60, 00:05:44.120 "nop_in_interval": 30, 00:05:44.120 "disable_chap": false, 00:05:44.120 "require_chap": false, 00:05:44.120 "mutual_chap": false, 00:05:44.120 "chap_group": 0, 00:05:44.120 "max_large_datain_per_connection": 64, 00:05:44.120 "max_r2t_per_connection": 4, 00:05:44.120 "pdu_pool_size": 36864, 00:05:44.120 "immediate_data_pool_size": 16384, 00:05:44.120 "data_out_pool_size": 2048 00:05:44.120 } 00:05:44.120 } 00:05:44.120 ] 00:05:44.120 } 00:05:44.120 ] 00:05:44.120 } 00:05:44.120 13:44:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:44.120 13:44:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3631849 00:05:44.120 13:44:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 3631849 ']' 00:05:44.120 13:44:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 3631849 00:05:44.120 13:44:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:44.120 13:44:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:44.120 13:44:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3631849 00:05:44.120 13:44:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:44.120 13:44:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:44.120 13:44:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3631849' 00:05:44.120 killing process with pid 3631849 00:05:44.120 13:44:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 3631849 00:05:44.120 13:44:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 3631849 00:05:44.683 13:44:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3631989 00:05:44.683 13:44:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:44.683 13:44:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:49.984 13:44:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3631989 00:05:49.984 13:44:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 3631989 ']' 00:05:49.984 13:44:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 3631989 00:05:49.984 13:44:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:49.984 13:44:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:49.984 13:44:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3631989 00:05:49.984 13:44:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:49.984 13:44:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:49.985 13:44:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3631989' 00:05:49.985 killing process with pid 3631989 00:05:49.985 13:44:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 3631989 00:05:49.985 13:44:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 3631989 
00:05:49.985 13:44:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:49.985 13:44:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:49.985 00:05:49.985 real 0m6.518s 00:05:49.985 user 0m6.124s 00:05:49.985 sys 0m0.668s 00:05:49.985 13:44:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.985 13:44:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:49.985 ************************************ 00:05:49.985 END TEST skip_rpc_with_json 00:05:49.985 ************************************ 00:05:49.985 13:44:44 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:49.985 13:44:44 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:49.985 13:44:44 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:49.985 13:44:44 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.985 13:44:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.243 ************************************ 00:05:50.243 START TEST skip_rpc_with_delay 00:05:50.243 ************************************ 00:05:50.243 13:44:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:50.243 13:44:44 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:50.243 13:44:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:50.243 13:44:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:50.243 13:44:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:50.243 13:44:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:50.243 13:44:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:50.243 13:44:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:50.243 13:44:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:50.243 13:44:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:50.243 13:44:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:50.243 13:44:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:50.243 13:44:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:50.243 [2024-07-15 13:44:44.898360] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
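The "Cannot use '--wait-for-rpc'" error above is the expected outcome of the skip_rpc_with_delay check: --wait-for-rpc only makes sense when an RPC server will be started. A minimal sketch of the same negative test, using the flags shown in this log:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # spdk_tgt should refuse this flag combination and exit non-zero
    if "$SPDK/build/bin/spdk_tgt" --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "unexpected: target started with --wait-for-rpc and no RPC server" >&2
        exit 1
    fi
    echo "flag combination rejected, as expected"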
00:05:50.243 [2024-07-15 13:44:44.898481] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:50.243 13:44:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:50.243 13:44:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:50.243 13:44:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:50.243 13:44:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:50.243 00:05:50.243 real 0m0.068s 00:05:50.243 user 0m0.043s 00:05:50.243 sys 0m0.025s 00:05:50.243 13:44:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.243 13:44:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:50.243 ************************************ 00:05:50.243 END TEST skip_rpc_with_delay 00:05:50.243 ************************************ 00:05:50.243 13:44:44 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:50.243 13:44:44 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:50.243 13:44:44 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:50.243 13:44:44 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:50.243 13:44:44 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:50.243 13:44:44 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.243 13:44:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.243 ************************************ 00:05:50.243 START TEST exit_on_failed_rpc_init 00:05:50.243 ************************************ 00:05:50.243 13:44:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:50.243 13:44:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3632707 00:05:50.243 13:44:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:50.243 13:44:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3632707 00:05:50.243 13:44:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 3632707 ']' 00:05:50.243 13:44:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.243 13:44:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:50.243 13:44:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.243 13:44:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:50.243 13:44:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:50.243 [2024-07-15 13:44:45.015699] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
00:05:50.243 [2024-07-15 13:44:45.015805] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3632707 ] 00:05:50.243 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.243 [2024-07-15 13:44:45.075911] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.501 [2024-07-15 13:44:45.187520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.759 13:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:50.759 13:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:50.759 13:44:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:50.759 13:44:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:50.759 13:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:50.759 13:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:50.759 13:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:50.759 13:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:50.759 13:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:50.759 13:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:50.759 13:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:50.759 13:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:50.759 13:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:50.759 13:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:50.759 13:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:50.759 [2024-07-15 13:44:45.486210] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
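The second spdk_tgt (-m 0x2) started here is expected to fail: the first instance already owns the default RPC socket, so exit_on_failed_rpc_init only passes if this launch exits non-zero. Condensed to its expectation (paths illustrative, the listen-wait step elided):

  ./build/bin/spdk_tgt -m 0x1 &      # first instance claims /var/tmp/spdk.sock
  first=$!
  # ...wait until the first instance is listening...
  if ./build/bin/spdk_tgt -m 0x2; then
      echo "unexpected: second target started although the RPC socket is in use" >&2
  fi
  kill "$first"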
00:05:50.759 [2024-07-15 13:44:45.486286] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3632712 ] 00:05:50.759 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.759 [2024-07-15 13:44:45.542615] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.017 [2024-07-15 13:44:45.655105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.017 [2024-07-15 13:44:45.655224] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:51.017 [2024-07-15 13:44:45.655246] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:51.017 [2024-07-15 13:44:45.655258] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:51.017 13:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:51.017 13:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:51.017 13:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:51.017 13:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:51.017 13:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:51.017 13:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:51.017 13:44:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:51.017 13:44:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3632707 00:05:51.017 13:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 3632707 ']' 00:05:51.017 13:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 3632707 00:05:51.017 13:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:51.017 13:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:51.017 13:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3632707 00:05:51.017 13:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:51.017 13:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:51.017 13:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3632707' 00:05:51.017 killing process with pid 3632707 00:05:51.017 13:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 3632707 00:05:51.017 13:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 3632707 00:05:51.583 00:05:51.583 real 0m1.282s 00:05:51.583 user 0m1.441s 00:05:51.583 sys 0m0.441s 00:05:51.583 13:44:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.583 13:44:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:51.583 ************************************ 00:05:51.583 END TEST exit_on_failed_rpc_init 00:05:51.583 ************************************ 00:05:51.583 13:44:46 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:51.583 13:44:46 skip_rpc -- 
rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:51.583 00:05:51.583 real 0m13.582s 00:05:51.583 user 0m12.866s 00:05:51.583 sys 0m1.611s 00:05:51.583 13:44:46 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.583 13:44:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.583 ************************************ 00:05:51.583 END TEST skip_rpc 00:05:51.583 ************************************ 00:05:51.583 13:44:46 -- common/autotest_common.sh@1142 -- # return 0 00:05:51.583 13:44:46 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:51.583 13:44:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:51.583 13:44:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.583 13:44:46 -- common/autotest_common.sh@10 -- # set +x 00:05:51.583 ************************************ 00:05:51.583 START TEST rpc_client 00:05:51.583 ************************************ 00:05:51.583 13:44:46 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:51.583 * Looking for test storage... 00:05:51.583 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:51.583 13:44:46 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:51.583 OK 00:05:51.583 13:44:46 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:51.583 00:05:51.583 real 0m0.071s 00:05:51.583 user 0m0.033s 00:05:51.583 sys 0m0.043s 00:05:51.583 13:44:46 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.583 13:44:46 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:51.583 ************************************ 00:05:51.583 END TEST rpc_client 00:05:51.583 ************************************ 00:05:51.583 13:44:46 -- common/autotest_common.sh@1142 -- # return 0 00:05:51.583 13:44:46 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:51.583 13:44:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:51.583 13:44:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.583 13:44:46 -- common/autotest_common.sh@10 -- # set +x 00:05:51.842 ************************************ 00:05:51.842 START TEST json_config 00:05:51.842 ************************************ 00:05:51.842 13:44:46 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:51.842 13:44:46 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:51.842 13:44:46 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:51.842 13:44:46 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:51.842 13:44:46 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:51.842 13:44:46 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:51.842 13:44:46 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:51.842 13:44:46 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:51.842 13:44:46 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:51.842 13:44:46 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:51.842 
13:44:46 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:51.842 13:44:46 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:51.842 13:44:46 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:51.842 13:44:46 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:05:51.842 13:44:46 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:05:51.842 13:44:46 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:51.842 13:44:46 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:51.842 13:44:46 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:51.842 13:44:46 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:51.842 13:44:46 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:51.842 13:44:46 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:51.842 13:44:46 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:51.842 13:44:46 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:51.842 13:44:46 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.843 13:44:46 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.843 13:44:46 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.843 13:44:46 json_config -- paths/export.sh@5 -- # export PATH 00:05:51.843 13:44:46 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.843 13:44:46 json_config -- nvmf/common.sh@47 -- # : 0 00:05:51.843 13:44:46 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:51.843 13:44:46 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:51.843 13:44:46 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:51.843 13:44:46 
json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:51.843 13:44:46 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:51.843 13:44:46 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:51.843 13:44:46 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:51.843 13:44:46 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:51.843 13:44:46 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:51.843 13:44:46 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:51.843 13:44:46 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:51.843 13:44:46 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:51.843 13:44:46 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:51.843 13:44:46 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:51.843 13:44:46 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:51.843 13:44:46 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:51.843 13:44:46 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:51.843 13:44:46 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:51.843 13:44:46 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:51.843 13:44:46 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:51.843 13:44:46 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:51.843 13:44:46 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:51.843 13:44:46 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:51.843 13:44:46 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:51.843 INFO: JSON configuration test init 00:05:51.843 13:44:46 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:51.843 13:44:46 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:51.843 13:44:46 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:51.843 13:44:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:51.843 13:44:46 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:51.843 13:44:46 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:51.843 13:44:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:51.843 13:44:46 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:51.843 13:44:46 json_config -- json_config/common.sh@9 -- # local app=target 00:05:51.843 13:44:46 json_config -- json_config/common.sh@10 -- # shift 00:05:51.843 13:44:46 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:51.843 13:44:46 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:51.843 13:44:46 
json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:51.843 13:44:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:51.843 13:44:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:51.843 13:44:46 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3632956 00:05:51.843 13:44:46 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:51.843 13:44:46 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:51.843 Waiting for target to run... 00:05:51.843 13:44:46 json_config -- json_config/common.sh@25 -- # waitforlisten 3632956 /var/tmp/spdk_tgt.sock 00:05:51.843 13:44:46 json_config -- common/autotest_common.sh@829 -- # '[' -z 3632956 ']' 00:05:51.843 13:44:46 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:51.843 13:44:46 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:51.843 13:44:46 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:51.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:51.843 13:44:46 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:51.843 13:44:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:51.843 [2024-07-15 13:44:46.557821] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:05:51.843 [2024-07-15 13:44:46.557931] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3632956 ] 00:05:51.843 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.102 [2024-07-15 13:44:46.901886] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.361 [2024-07-15 13:44:46.986075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.926 13:44:47 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:52.926 13:44:47 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:52.926 13:44:47 json_config -- json_config/common.sh@26 -- # echo '' 00:05:52.926 00:05:52.926 13:44:47 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:52.926 13:44:47 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:52.926 13:44:47 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:52.926 13:44:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:52.926 13:44:47 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:52.926 13:44:47 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:52.926 13:44:47 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:52.926 13:44:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:52.926 13:44:47 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:52.926 13:44:47 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:52.926 13:44:47 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:56.206 13:44:50 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:56.206 13:44:50 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:56.206 13:44:50 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:56.206 13:44:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:56.206 13:44:50 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:56.206 13:44:50 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:56.206 13:44:50 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:56.206 13:44:50 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:56.206 13:44:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:56.206 13:44:50 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:56.206 13:44:50 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:56.206 13:44:50 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:56.206 13:44:50 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:56.206 13:44:50 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:56.206 13:44:50 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:56.206 13:44:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:56.206 13:44:50 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:56.206 13:44:50 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:56.206 13:44:50 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:56.206 13:44:50 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:56.206 13:44:50 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:56.206 13:44:50 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:56.206 13:44:50 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:56.206 13:44:50 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:56.206 13:44:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:56.206 13:44:50 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:56.206 13:44:50 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:56.206 13:44:50 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:56.206 13:44:50 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:56.206 13:44:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:56.468 MallocForNvmf0 00:05:56.468 13:44:51 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:56.468 13:44:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:56.768 MallocForNvmf1 00:05:56.768 13:44:51 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:56.768 13:44:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:57.040 [2024-07-15 13:44:51.644231] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:57.040 13:44:51 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:57.040 13:44:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:57.297 13:44:51 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:57.297 13:44:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:57.555 13:44:52 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:57.555 13:44:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:57.813 13:44:52 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:57.813 13:44:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:57.813 [2024-07-15 13:44:52.623382] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:57.813 13:44:52 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:57.813 13:44:52 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:57.813 13:44:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:58.071 13:44:52 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:58.071 13:44:52 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:58.071 13:44:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:58.071 13:44:52 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:58.071 13:44:52 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:58.072 13:44:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:58.072 MallocBdevForConfigChangeCheck 00:05:58.329 13:44:52 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:58.329 13:44:52 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:58.329 13:44:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:58.329 13:44:52 json_config -- 
json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:58.329 13:44:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:58.586 13:44:53 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:58.586 INFO: shutting down applications... 00:05:58.586 13:44:53 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:58.586 13:44:53 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:58.586 13:44:53 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:58.586 13:44:53 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:00.486 Calling clear_iscsi_subsystem 00:06:00.486 Calling clear_nvmf_subsystem 00:06:00.486 Calling clear_nbd_subsystem 00:06:00.486 Calling clear_ublk_subsystem 00:06:00.486 Calling clear_vhost_blk_subsystem 00:06:00.486 Calling clear_vhost_scsi_subsystem 00:06:00.486 Calling clear_bdev_subsystem 00:06:00.486 13:44:54 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:00.486 13:44:54 json_config -- json_config/json_config.sh@343 -- # count=100 00:06:00.486 13:44:54 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:06:00.486 13:44:54 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:00.486 13:44:54 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:00.486 13:44:54 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:00.486 13:44:55 json_config -- json_config/json_config.sh@345 -- # break 00:06:00.486 13:44:55 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:06:00.486 13:44:55 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:06:00.486 13:44:55 json_config -- json_config/common.sh@31 -- # local app=target 00:06:00.486 13:44:55 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:00.486 13:44:55 json_config -- json_config/common.sh@35 -- # [[ -n 3632956 ]] 00:06:00.486 13:44:55 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3632956 00:06:00.486 13:44:55 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:00.486 13:44:55 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:00.486 13:44:55 json_config -- json_config/common.sh@41 -- # kill -0 3632956 00:06:00.486 13:44:55 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:01.052 13:44:55 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:01.052 13:44:55 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:01.052 13:44:55 json_config -- json_config/common.sh@41 -- # kill -0 3632956 00:06:01.052 13:44:55 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:01.052 13:44:55 json_config -- json_config/common.sh@43 -- # break 00:06:01.052 13:44:55 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:01.052 13:44:55 json_config -- json_config/common.sh@53 -- # echo 'SPDK target 
shutdown done' 00:06:01.052 SPDK target shutdown done 00:06:01.052 13:44:55 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:06:01.052 INFO: relaunching applications... 00:06:01.052 13:44:55 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:01.052 13:44:55 json_config -- json_config/common.sh@9 -- # local app=target 00:06:01.052 13:44:55 json_config -- json_config/common.sh@10 -- # shift 00:06:01.052 13:44:55 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:01.052 13:44:55 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:01.052 13:44:55 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:01.052 13:44:55 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:01.052 13:44:55 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:01.052 13:44:55 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3634267 00:06:01.052 13:44:55 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:01.052 13:44:55 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:01.053 Waiting for target to run... 00:06:01.053 13:44:55 json_config -- json_config/common.sh@25 -- # waitforlisten 3634267 /var/tmp/spdk_tgt.sock 00:06:01.053 13:44:55 json_config -- common/autotest_common.sh@829 -- # '[' -z 3634267 ']' 00:06:01.053 13:44:55 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:01.053 13:44:55 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.053 13:44:55 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:01.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:01.053 13:44:55 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.053 13:44:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.053 [2024-07-15 13:44:55.866805] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
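The relaunch above restarts the target from the JSON it saved a moment earlier, which is the core of the json_config round trip. Stripped of the helper plumbing, the sequence is roughly this (socket path, memory size and config file name as they appear in the trace; error handling omitted):

  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json
  # stop the running target; the test sends SIGINT and polls it with kill -0
  ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json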
00:06:01.053 [2024-07-15 13:44:55.866893] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3634267 ] 00:06:01.311 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.567 [2024-07-15 13:44:56.405854] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.823 [2024-07-15 13:44:56.499320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.114 [2024-07-15 13:44:59.533662] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:05.114 [2024-07-15 13:44:59.566111] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:05.679 13:45:00 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:05.679 13:45:00 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:05.679 13:45:00 json_config -- json_config/common.sh@26 -- # echo '' 00:06:05.679 00:06:05.679 13:45:00 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:06:05.679 13:45:00 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:05.679 INFO: Checking if target configuration is the same... 00:06:05.679 13:45:00 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:05.679 13:45:00 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:06:05.679 13:45:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:05.679 + '[' 2 -ne 2 ']' 00:06:05.679 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:05.679 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:05.679 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:05.679 +++ basename /dev/fd/62 00:06:05.679 ++ mktemp /tmp/62.XXX 00:06:05.679 + tmp_file_1=/tmp/62.V3a 00:06:05.679 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:05.679 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:05.679 + tmp_file_2=/tmp/spdk_tgt_config.json.wHR 00:06:05.679 + ret=0 00:06:05.679 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:05.937 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:05.937 + diff -u /tmp/62.V3a /tmp/spdk_tgt_config.json.wHR 00:06:05.937 + echo 'INFO: JSON config files are the same' 00:06:05.937 INFO: JSON config files are the same 00:06:05.937 + rm /tmp/62.V3a /tmp/spdk_tgt_config.json.wHR 00:06:05.937 + exit 0 00:06:05.937 13:45:00 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:06:05.937 13:45:00 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:05.937 INFO: changing configuration and checking if this can be detected... 
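Both the "configuration is the same" check above and the change check that follows funnel the saved and live configs through config_filter.py before diffing, so key ordering cannot cause false mismatches. In outline (temporary file names are illustrative; the real json_diff.sh allocates them with mktemp as shown in the trace):

  ./test/json_config/config_filter.py -method sort < saved_config.json   > /tmp/a.json
  ./test/json_config/config_filter.py -method sort < current_config.json > /tmp/b.json
  diff -u /tmp/a.json /tmp/b.json && echo "INFO: JSON config files are the same"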
00:06:05.937 13:45:00 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:05.937 13:45:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:06.195 13:45:00 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:06.195 13:45:00 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:06:06.195 13:45:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:06.195 + '[' 2 -ne 2 ']' 00:06:06.195 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:06.195 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:06.195 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:06.195 +++ basename /dev/fd/62 00:06:06.195 ++ mktemp /tmp/62.XXX 00:06:06.195 + tmp_file_1=/tmp/62.63n 00:06:06.195 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:06.195 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:06.195 + tmp_file_2=/tmp/spdk_tgt_config.json.qId 00:06:06.195 + ret=0 00:06:06.195 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:06.760 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:06.760 + diff -u /tmp/62.63n /tmp/spdk_tgt_config.json.qId 00:06:06.760 + ret=1 00:06:06.760 + echo '=== Start of file: /tmp/62.63n ===' 00:06:06.760 + cat /tmp/62.63n 00:06:06.760 + echo '=== End of file: /tmp/62.63n ===' 00:06:06.760 + echo '' 00:06:06.760 + echo '=== Start of file: /tmp/spdk_tgt_config.json.qId ===' 00:06:06.760 + cat /tmp/spdk_tgt_config.json.qId 00:06:06.760 + echo '=== End of file: /tmp/spdk_tgt_config.json.qId ===' 00:06:06.760 + echo '' 00:06:06.760 + rm /tmp/62.63n /tmp/spdk_tgt_config.json.qId 00:06:06.760 + exit 1 00:06:06.760 13:45:01 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:06:06.760 INFO: configuration change detected. 
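The change-detection pass hinges on a throwaway marker bdev: MallocBdevForConfigChangeCheck was created while the config was being built, so deleting it and re-diffing must yield a non-empty diff (the ret=1 above). The idea in isolation, with the output file name purely illustrative:

  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/after.json
  if diff -u <(./test/json_config/config_filter.py -method sort < spdk_tgt_config.json) \
             <(./test/json_config/config_filter.py -method sort < /tmp/after.json) >/dev/null; then
      echo "unexpected: deleting the marker bdev left the saved config unchanged" >&2
  fi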
00:06:06.760 13:45:01 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:06:06.760 13:45:01 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:06:06.760 13:45:01 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:06.760 13:45:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:06.760 13:45:01 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:06:06.760 13:45:01 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:06:06.760 13:45:01 json_config -- json_config/json_config.sh@317 -- # [[ -n 3634267 ]] 00:06:06.760 13:45:01 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:06:06.760 13:45:01 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:06:06.760 13:45:01 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:06.760 13:45:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:06.760 13:45:01 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:06:06.760 13:45:01 json_config -- json_config/json_config.sh@193 -- # uname -s 00:06:06.760 13:45:01 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:06:06.760 13:45:01 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:06:06.760 13:45:01 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:06:06.760 13:45:01 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:06:06.760 13:45:01 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:06.760 13:45:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:06.760 13:45:01 json_config -- json_config/json_config.sh@323 -- # killprocess 3634267 00:06:06.760 13:45:01 json_config -- common/autotest_common.sh@948 -- # '[' -z 3634267 ']' 00:06:06.760 13:45:01 json_config -- common/autotest_common.sh@952 -- # kill -0 3634267 00:06:06.760 13:45:01 json_config -- common/autotest_common.sh@953 -- # uname 00:06:06.760 13:45:01 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:06.760 13:45:01 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3634267 00:06:06.760 13:45:01 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:06.760 13:45:01 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:06.760 13:45:01 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3634267' 00:06:06.760 killing process with pid 3634267 00:06:06.760 13:45:01 json_config -- common/autotest_common.sh@967 -- # kill 3634267 00:06:06.760 13:45:01 json_config -- common/autotest_common.sh@972 -- # wait 3634267 00:06:08.655 13:45:03 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:08.655 13:45:03 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:08.655 13:45:03 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:08.655 13:45:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.655 13:45:03 json_config -- json_config/json_config.sh@328 -- # return 0 00:06:08.655 13:45:03 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:08.655 INFO: Success 00:06:08.655 00:06:08.655 real 0m16.681s 
00:06:08.655 user 0m18.576s 00:06:08.655 sys 0m2.047s 00:06:08.655 13:45:03 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.655 13:45:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.655 ************************************ 00:06:08.655 END TEST json_config 00:06:08.655 ************************************ 00:06:08.655 13:45:03 -- common/autotest_common.sh@1142 -- # return 0 00:06:08.655 13:45:03 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:08.655 13:45:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:08.655 13:45:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.655 13:45:03 -- common/autotest_common.sh@10 -- # set +x 00:06:08.655 ************************************ 00:06:08.655 START TEST json_config_extra_key 00:06:08.655 ************************************ 00:06:08.655 13:45:03 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:08.655 13:45:03 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:08.655 13:45:03 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:08.655 13:45:03 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:08.655 13:45:03 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:08.655 13:45:03 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:08.655 13:45:03 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:08.655 13:45:03 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:08.655 13:45:03 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:08.655 13:45:03 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:08.655 13:45:03 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:08.656 13:45:03 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:08.656 13:45:03 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:08.656 13:45:03 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:06:08.656 13:45:03 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:06:08.656 13:45:03 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:08.656 13:45:03 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:08.656 13:45:03 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:08.656 13:45:03 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:08.656 13:45:03 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:08.656 13:45:03 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:08.656 13:45:03 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:08.656 13:45:03 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:08.656 13:45:03 json_config_extra_key -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.656 13:45:03 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.656 13:45:03 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.656 13:45:03 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:08.656 13:45:03 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.656 13:45:03 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:08.656 13:45:03 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:08.656 13:45:03 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:08.656 13:45:03 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:08.656 13:45:03 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:08.656 13:45:03 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:08.656 13:45:03 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:08.656 13:45:03 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:08.656 13:45:03 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:08.656 13:45:03 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:08.656 13:45:03 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:08.656 13:45:03 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:08.656 13:45:03 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:08.656 13:45:03 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:08.656 13:45:03 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:08.656 13:45:03 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:08.656 13:45:03 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:08.656 13:45:03 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:08.656 13:45:03 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:08.656 13:45:03 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:08.656 INFO: launching applications... 00:06:08.656 13:45:03 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:08.656 13:45:03 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:08.656 13:45:03 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:08.656 13:45:03 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:08.656 13:45:03 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:08.656 13:45:03 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:08.656 13:45:03 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:08.656 13:45:03 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:08.656 13:45:03 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3635306 00:06:08.656 13:45:03 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:08.656 13:45:03 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:08.656 Waiting for target to run... 00:06:08.656 13:45:03 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3635306 /var/tmp/spdk_tgt.sock 00:06:08.656 13:45:03 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 3635306 ']' 00:06:08.656 13:45:03 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:08.656 13:45:03 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:08.656 13:45:03 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:08.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:08.656 13:45:03 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:08.656 13:45:03 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:08.656 [2024-07-15 13:45:03.279765] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
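json_config_extra_key drives the same start/stop helpers but feeds spdk_tgt a pre-canned config (extra_key.json, a fixture shipped with the test and not reproduced here). The per-app bookkeeping in the trace is plain bash associative arrays; assembled into a launch, the pattern looks roughly like this (a sketch of the pattern only, not the actual body of json_config_test_start_app):

  declare -A app_pid app_socket app_params configs_path
  app_socket[target]=/var/tmp/spdk_tgt.sock
  app_params[target]='-m 0x1 -s 1024'
  configs_path[target]=./test/json_config/extra_key.json
  app=target
  # word-splitting of app_params is intentional so -m and -s stay separate arguments
  ./build/bin/spdk_tgt ${app_params[$app]} -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
  app_pid[$app]=$!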
00:06:08.656 [2024-07-15 13:45:03.279853] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3635306 ] 00:06:08.656 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.913 [2024-07-15 13:45:03.616152] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.913 [2024-07-15 13:45:03.694628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.477 13:45:04 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.477 13:45:04 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:06:09.477 13:45:04 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:09.477 00:06:09.477 13:45:04 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:09.477 INFO: shutting down applications... 00:06:09.477 13:45:04 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:09.477 13:45:04 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:09.477 13:45:04 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:09.477 13:45:04 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3635306 ]] 00:06:09.477 13:45:04 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3635306 00:06:09.477 13:45:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:09.477 13:45:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:09.477 13:45:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3635306 00:06:09.477 13:45:04 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:10.043 13:45:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:10.043 13:45:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:10.043 13:45:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3635306 00:06:10.043 13:45:04 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:10.043 13:45:04 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:10.043 13:45:04 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:10.043 13:45:04 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:10.043 SPDK target shutdown done 00:06:10.043 13:45:04 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:10.043 Success 00:06:10.043 00:06:10.043 real 0m1.548s 00:06:10.043 user 0m1.540s 00:06:10.043 sys 0m0.421s 00:06:10.043 13:45:04 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.043 13:45:04 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:10.043 ************************************ 00:06:10.043 END TEST json_config_extra_key 00:06:10.043 ************************************ 00:06:10.043 13:45:04 -- common/autotest_common.sh@1142 -- # return 0 00:06:10.043 13:45:04 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:10.043 13:45:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:10.043 13:45:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.043 13:45:04 -- 
common/autotest_common.sh@10 -- # set +x 00:06:10.043 ************************************ 00:06:10.043 START TEST alias_rpc 00:06:10.043 ************************************ 00:06:10.043 13:45:04 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:10.043 * Looking for test storage... 00:06:10.043 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:10.043 13:45:04 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:10.043 13:45:04 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3635609 00:06:10.043 13:45:04 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:10.043 13:45:04 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3635609 00:06:10.043 13:45:04 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 3635609 ']' 00:06:10.043 13:45:04 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.043 13:45:04 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:10.043 13:45:04 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.043 13:45:04 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:10.043 13:45:04 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.043 [2024-07-15 13:45:04.876828] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:06:10.043 [2024-07-15 13:45:04.876909] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3635609 ] 00:06:10.301 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.301 [2024-07-15 13:45:04.936905] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.301 [2024-07-15 13:45:05.040399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.559 13:45:05 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.559 13:45:05 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:10.559 13:45:05 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:10.817 13:45:05 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3635609 00:06:10.817 13:45:05 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 3635609 ']' 00:06:10.817 13:45:05 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 3635609 00:06:10.817 13:45:05 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:06:10.817 13:45:05 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:10.817 13:45:05 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3635609 00:06:10.817 13:45:05 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:10.817 13:45:05 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:10.817 13:45:05 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3635609' 00:06:10.817 killing process with pid 3635609 00:06:10.817 13:45:05 alias_rpc -- 
common/autotest_common.sh@967 -- # kill 3635609 00:06:10.817 13:45:05 alias_rpc -- common/autotest_common.sh@972 -- # wait 3635609 00:06:11.384 00:06:11.384 real 0m1.250s 00:06:11.384 user 0m1.340s 00:06:11.384 sys 0m0.411s 00:06:11.384 13:45:06 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.384 13:45:06 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.384 ************************************ 00:06:11.384 END TEST alias_rpc 00:06:11.384 ************************************ 00:06:11.384 13:45:06 -- common/autotest_common.sh@1142 -- # return 0 00:06:11.384 13:45:06 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:11.384 13:45:06 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:11.384 13:45:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:11.384 13:45:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.384 13:45:06 -- common/autotest_common.sh@10 -- # set +x 00:06:11.384 ************************************ 00:06:11.384 START TEST spdkcli_tcp 00:06:11.384 ************************************ 00:06:11.384 13:45:06 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:11.384 * Looking for test storage... 00:06:11.384 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:11.384 13:45:06 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:11.384 13:45:06 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:11.384 13:45:06 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:11.384 13:45:06 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:11.384 13:45:06 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:11.384 13:45:06 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:11.384 13:45:06 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:11.384 13:45:06 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:11.384 13:45:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:11.384 13:45:06 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3635801 00:06:11.384 13:45:06 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:11.384 13:45:06 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3635801 00:06:11.384 13:45:06 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 3635801 ']' 00:06:11.384 13:45:06 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.384 13:45:06 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.384 13:45:06 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.384 13:45:06 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.384 13:45:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:11.384 [2024-07-15 13:45:06.180552] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
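The shutdown sequences traced above (json_config_extra_key and the alias_rpc killprocess) both send a signal and then poll the target with kill -0 until it is gone; distilled to a standalone sketch, this is the pattern (PID and the 30 x 0.5 s budget mirror the trace, but this is an illustration, not the common.sh helper itself):
    app_pid=3635306                      # hypothetical: PID recorded when spdk_tgt was launched
    kill -SIGINT "$app_pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$app_pid" 2>/dev/null || { echo 'SPDK target shutdown done'; break; }
        sleep 0.5                        # still alive, give it another half second
    done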
00:06:11.384 [2024-07-15 13:45:06.180630] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3635801 ] 00:06:11.384 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.642 [2024-07-15 13:45:06.240213] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:11.642 [2024-07-15 13:45:06.346980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.642 [2024-07-15 13:45:06.346984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.899 13:45:06 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:11.899 13:45:06 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:06:11.899 13:45:06 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3635807 00:06:11.899 13:45:06 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:11.899 13:45:06 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:12.156 [ 00:06:12.156 "bdev_malloc_delete", 00:06:12.157 "bdev_malloc_create", 00:06:12.157 "bdev_null_resize", 00:06:12.157 "bdev_null_delete", 00:06:12.157 "bdev_null_create", 00:06:12.157 "bdev_nvme_cuse_unregister", 00:06:12.157 "bdev_nvme_cuse_register", 00:06:12.157 "bdev_opal_new_user", 00:06:12.157 "bdev_opal_set_lock_state", 00:06:12.157 "bdev_opal_delete", 00:06:12.157 "bdev_opal_get_info", 00:06:12.157 "bdev_opal_create", 00:06:12.157 "bdev_nvme_opal_revert", 00:06:12.157 "bdev_nvme_opal_init", 00:06:12.157 "bdev_nvme_send_cmd", 00:06:12.157 "bdev_nvme_get_path_iostat", 00:06:12.157 "bdev_nvme_get_mdns_discovery_info", 00:06:12.157 "bdev_nvme_stop_mdns_discovery", 00:06:12.157 "bdev_nvme_start_mdns_discovery", 00:06:12.157 "bdev_nvme_set_multipath_policy", 00:06:12.157 "bdev_nvme_set_preferred_path", 00:06:12.157 "bdev_nvme_get_io_paths", 00:06:12.157 "bdev_nvme_remove_error_injection", 00:06:12.157 "bdev_nvme_add_error_injection", 00:06:12.157 "bdev_nvme_get_discovery_info", 00:06:12.157 "bdev_nvme_stop_discovery", 00:06:12.157 "bdev_nvme_start_discovery", 00:06:12.157 "bdev_nvme_get_controller_health_info", 00:06:12.157 "bdev_nvme_disable_controller", 00:06:12.157 "bdev_nvme_enable_controller", 00:06:12.157 "bdev_nvme_reset_controller", 00:06:12.157 "bdev_nvme_get_transport_statistics", 00:06:12.157 "bdev_nvme_apply_firmware", 00:06:12.157 "bdev_nvme_detach_controller", 00:06:12.157 "bdev_nvme_get_controllers", 00:06:12.157 "bdev_nvme_attach_controller", 00:06:12.157 "bdev_nvme_set_hotplug", 00:06:12.157 "bdev_nvme_set_options", 00:06:12.157 "bdev_passthru_delete", 00:06:12.157 "bdev_passthru_create", 00:06:12.157 "bdev_lvol_set_parent_bdev", 00:06:12.157 "bdev_lvol_set_parent", 00:06:12.157 "bdev_lvol_check_shallow_copy", 00:06:12.157 "bdev_lvol_start_shallow_copy", 00:06:12.157 "bdev_lvol_grow_lvstore", 00:06:12.157 "bdev_lvol_get_lvols", 00:06:12.157 "bdev_lvol_get_lvstores", 00:06:12.157 "bdev_lvol_delete", 00:06:12.157 "bdev_lvol_set_read_only", 00:06:12.157 "bdev_lvol_resize", 00:06:12.157 "bdev_lvol_decouple_parent", 00:06:12.157 "bdev_lvol_inflate", 00:06:12.157 "bdev_lvol_rename", 00:06:12.157 "bdev_lvol_clone_bdev", 00:06:12.157 "bdev_lvol_clone", 00:06:12.157 "bdev_lvol_snapshot", 00:06:12.157 "bdev_lvol_create", 00:06:12.157 "bdev_lvol_delete_lvstore", 00:06:12.157 
"bdev_lvol_rename_lvstore", 00:06:12.157 "bdev_lvol_create_lvstore", 00:06:12.157 "bdev_raid_set_options", 00:06:12.157 "bdev_raid_remove_base_bdev", 00:06:12.157 "bdev_raid_add_base_bdev", 00:06:12.157 "bdev_raid_delete", 00:06:12.157 "bdev_raid_create", 00:06:12.157 "bdev_raid_get_bdevs", 00:06:12.157 "bdev_error_inject_error", 00:06:12.157 "bdev_error_delete", 00:06:12.157 "bdev_error_create", 00:06:12.157 "bdev_split_delete", 00:06:12.157 "bdev_split_create", 00:06:12.157 "bdev_delay_delete", 00:06:12.157 "bdev_delay_create", 00:06:12.157 "bdev_delay_update_latency", 00:06:12.157 "bdev_zone_block_delete", 00:06:12.157 "bdev_zone_block_create", 00:06:12.157 "blobfs_create", 00:06:12.157 "blobfs_detect", 00:06:12.157 "blobfs_set_cache_size", 00:06:12.157 "bdev_aio_delete", 00:06:12.157 "bdev_aio_rescan", 00:06:12.157 "bdev_aio_create", 00:06:12.157 "bdev_ftl_set_property", 00:06:12.157 "bdev_ftl_get_properties", 00:06:12.157 "bdev_ftl_get_stats", 00:06:12.157 "bdev_ftl_unmap", 00:06:12.157 "bdev_ftl_unload", 00:06:12.157 "bdev_ftl_delete", 00:06:12.157 "bdev_ftl_load", 00:06:12.157 "bdev_ftl_create", 00:06:12.157 "bdev_virtio_attach_controller", 00:06:12.157 "bdev_virtio_scsi_get_devices", 00:06:12.157 "bdev_virtio_detach_controller", 00:06:12.157 "bdev_virtio_blk_set_hotplug", 00:06:12.157 "bdev_iscsi_delete", 00:06:12.157 "bdev_iscsi_create", 00:06:12.157 "bdev_iscsi_set_options", 00:06:12.157 "accel_error_inject_error", 00:06:12.157 "ioat_scan_accel_module", 00:06:12.157 "dsa_scan_accel_module", 00:06:12.157 "iaa_scan_accel_module", 00:06:12.157 "vfu_virtio_create_scsi_endpoint", 00:06:12.157 "vfu_virtio_scsi_remove_target", 00:06:12.157 "vfu_virtio_scsi_add_target", 00:06:12.157 "vfu_virtio_create_blk_endpoint", 00:06:12.157 "vfu_virtio_delete_endpoint", 00:06:12.157 "keyring_file_remove_key", 00:06:12.157 "keyring_file_add_key", 00:06:12.157 "keyring_linux_set_options", 00:06:12.157 "iscsi_get_histogram", 00:06:12.157 "iscsi_enable_histogram", 00:06:12.157 "iscsi_set_options", 00:06:12.157 "iscsi_get_auth_groups", 00:06:12.157 "iscsi_auth_group_remove_secret", 00:06:12.157 "iscsi_auth_group_add_secret", 00:06:12.157 "iscsi_delete_auth_group", 00:06:12.157 "iscsi_create_auth_group", 00:06:12.157 "iscsi_set_discovery_auth", 00:06:12.157 "iscsi_get_options", 00:06:12.157 "iscsi_target_node_request_logout", 00:06:12.157 "iscsi_target_node_set_redirect", 00:06:12.157 "iscsi_target_node_set_auth", 00:06:12.157 "iscsi_target_node_add_lun", 00:06:12.157 "iscsi_get_stats", 00:06:12.157 "iscsi_get_connections", 00:06:12.157 "iscsi_portal_group_set_auth", 00:06:12.157 "iscsi_start_portal_group", 00:06:12.157 "iscsi_delete_portal_group", 00:06:12.157 "iscsi_create_portal_group", 00:06:12.157 "iscsi_get_portal_groups", 00:06:12.157 "iscsi_delete_target_node", 00:06:12.157 "iscsi_target_node_remove_pg_ig_maps", 00:06:12.157 "iscsi_target_node_add_pg_ig_maps", 00:06:12.157 "iscsi_create_target_node", 00:06:12.157 "iscsi_get_target_nodes", 00:06:12.157 "iscsi_delete_initiator_group", 00:06:12.157 "iscsi_initiator_group_remove_initiators", 00:06:12.157 "iscsi_initiator_group_add_initiators", 00:06:12.157 "iscsi_create_initiator_group", 00:06:12.157 "iscsi_get_initiator_groups", 00:06:12.157 "nvmf_set_crdt", 00:06:12.157 "nvmf_set_config", 00:06:12.157 "nvmf_set_max_subsystems", 00:06:12.157 "nvmf_stop_mdns_prr", 00:06:12.157 "nvmf_publish_mdns_prr", 00:06:12.157 "nvmf_subsystem_get_listeners", 00:06:12.157 "nvmf_subsystem_get_qpairs", 00:06:12.157 "nvmf_subsystem_get_controllers", 00:06:12.157 
"nvmf_get_stats", 00:06:12.157 "nvmf_get_transports", 00:06:12.157 "nvmf_create_transport", 00:06:12.157 "nvmf_get_targets", 00:06:12.157 "nvmf_delete_target", 00:06:12.157 "nvmf_create_target", 00:06:12.157 "nvmf_subsystem_allow_any_host", 00:06:12.157 "nvmf_subsystem_remove_host", 00:06:12.157 "nvmf_subsystem_add_host", 00:06:12.157 "nvmf_ns_remove_host", 00:06:12.157 "nvmf_ns_add_host", 00:06:12.157 "nvmf_subsystem_remove_ns", 00:06:12.157 "nvmf_subsystem_add_ns", 00:06:12.157 "nvmf_subsystem_listener_set_ana_state", 00:06:12.157 "nvmf_discovery_get_referrals", 00:06:12.157 "nvmf_discovery_remove_referral", 00:06:12.157 "nvmf_discovery_add_referral", 00:06:12.157 "nvmf_subsystem_remove_listener", 00:06:12.157 "nvmf_subsystem_add_listener", 00:06:12.157 "nvmf_delete_subsystem", 00:06:12.157 "nvmf_create_subsystem", 00:06:12.157 "nvmf_get_subsystems", 00:06:12.157 "env_dpdk_get_mem_stats", 00:06:12.157 "nbd_get_disks", 00:06:12.157 "nbd_stop_disk", 00:06:12.157 "nbd_start_disk", 00:06:12.157 "ublk_recover_disk", 00:06:12.157 "ublk_get_disks", 00:06:12.157 "ublk_stop_disk", 00:06:12.157 "ublk_start_disk", 00:06:12.157 "ublk_destroy_target", 00:06:12.157 "ublk_create_target", 00:06:12.157 "virtio_blk_create_transport", 00:06:12.157 "virtio_blk_get_transports", 00:06:12.157 "vhost_controller_set_coalescing", 00:06:12.157 "vhost_get_controllers", 00:06:12.157 "vhost_delete_controller", 00:06:12.157 "vhost_create_blk_controller", 00:06:12.157 "vhost_scsi_controller_remove_target", 00:06:12.157 "vhost_scsi_controller_add_target", 00:06:12.157 "vhost_start_scsi_controller", 00:06:12.157 "vhost_create_scsi_controller", 00:06:12.157 "thread_set_cpumask", 00:06:12.157 "framework_get_governor", 00:06:12.157 "framework_get_scheduler", 00:06:12.157 "framework_set_scheduler", 00:06:12.157 "framework_get_reactors", 00:06:12.157 "thread_get_io_channels", 00:06:12.157 "thread_get_pollers", 00:06:12.157 "thread_get_stats", 00:06:12.157 "framework_monitor_context_switch", 00:06:12.157 "spdk_kill_instance", 00:06:12.157 "log_enable_timestamps", 00:06:12.157 "log_get_flags", 00:06:12.157 "log_clear_flag", 00:06:12.157 "log_set_flag", 00:06:12.157 "log_get_level", 00:06:12.157 "log_set_level", 00:06:12.157 "log_get_print_level", 00:06:12.157 "log_set_print_level", 00:06:12.157 "framework_enable_cpumask_locks", 00:06:12.157 "framework_disable_cpumask_locks", 00:06:12.158 "framework_wait_init", 00:06:12.158 "framework_start_init", 00:06:12.158 "scsi_get_devices", 00:06:12.158 "bdev_get_histogram", 00:06:12.158 "bdev_enable_histogram", 00:06:12.158 "bdev_set_qos_limit", 00:06:12.158 "bdev_set_qd_sampling_period", 00:06:12.158 "bdev_get_bdevs", 00:06:12.158 "bdev_reset_iostat", 00:06:12.158 "bdev_get_iostat", 00:06:12.158 "bdev_examine", 00:06:12.158 "bdev_wait_for_examine", 00:06:12.158 "bdev_set_options", 00:06:12.158 "notify_get_notifications", 00:06:12.158 "notify_get_types", 00:06:12.158 "accel_get_stats", 00:06:12.158 "accel_set_options", 00:06:12.158 "accel_set_driver", 00:06:12.158 "accel_crypto_key_destroy", 00:06:12.158 "accel_crypto_keys_get", 00:06:12.158 "accel_crypto_key_create", 00:06:12.158 "accel_assign_opc", 00:06:12.158 "accel_get_module_info", 00:06:12.158 "accel_get_opc_assignments", 00:06:12.158 "vmd_rescan", 00:06:12.158 "vmd_remove_device", 00:06:12.158 "vmd_enable", 00:06:12.158 "sock_get_default_impl", 00:06:12.158 "sock_set_default_impl", 00:06:12.158 "sock_impl_set_options", 00:06:12.158 "sock_impl_get_options", 00:06:12.158 "iobuf_get_stats", 00:06:12.158 "iobuf_set_options", 
00:06:12.158 "keyring_get_keys", 00:06:12.158 "framework_get_pci_devices", 00:06:12.158 "framework_get_config", 00:06:12.158 "framework_get_subsystems", 00:06:12.158 "vfu_tgt_set_base_path", 00:06:12.158 "trace_get_info", 00:06:12.158 "trace_get_tpoint_group_mask", 00:06:12.158 "trace_disable_tpoint_group", 00:06:12.158 "trace_enable_tpoint_group", 00:06:12.158 "trace_clear_tpoint_mask", 00:06:12.158 "trace_set_tpoint_mask", 00:06:12.158 "spdk_get_version", 00:06:12.158 "rpc_get_methods" 00:06:12.158 ] 00:06:12.158 13:45:06 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:12.158 13:45:06 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:12.158 13:45:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:12.158 13:45:06 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:12.158 13:45:06 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3635801 00:06:12.158 13:45:06 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 3635801 ']' 00:06:12.158 13:45:06 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 3635801 00:06:12.158 13:45:06 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:06:12.158 13:45:06 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:12.158 13:45:06 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3635801 00:06:12.158 13:45:06 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:12.158 13:45:06 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:12.158 13:45:06 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3635801' 00:06:12.158 killing process with pid 3635801 00:06:12.158 13:45:06 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 3635801 00:06:12.158 13:45:06 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 3635801 00:06:12.739 00:06:12.739 real 0m1.272s 00:06:12.739 user 0m2.231s 00:06:12.739 sys 0m0.434s 00:06:12.739 13:45:07 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.739 13:45:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:12.739 ************************************ 00:06:12.739 END TEST spdkcli_tcp 00:06:12.739 ************************************ 00:06:12.739 13:45:07 -- common/autotest_common.sh@1142 -- # return 0 00:06:12.739 13:45:07 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:12.739 13:45:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:12.739 13:45:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.739 13:45:07 -- common/autotest_common.sh@10 -- # set +x 00:06:12.739 ************************************ 00:06:12.739 START TEST dpdk_mem_utility 00:06:12.739 ************************************ 00:06:12.739 13:45:07 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:12.739 * Looking for test storage... 
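The spdkcli_tcp run above bridges the target's UNIX-domain RPC socket to TCP with socat before issuing rpc_get_methods; a minimal sketch of that bridge using the same port, paths, and rpc.py flags (backgrounding and teardown added for completeness):
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &    # bridge TCP 9998 -> spdk_tgt RPC socket
    socat_pid=$!
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods       # 100 retries, 2 s timeout, over TCP
    kill "$socat_pid"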
00:06:12.739 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:12.739 13:45:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:12.739 13:45:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3636156 00:06:12.739 13:45:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:12.739 13:45:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3636156 00:06:12.739 13:45:07 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 3636156 ']' 00:06:12.739 13:45:07 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.739 13:45:07 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:12.739 13:45:07 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.739 13:45:07 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:12.739 13:45:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:12.739 [2024-07-15 13:45:07.491451] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:06:12.739 [2024-07-15 13:45:07.491555] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3636156 ] 00:06:12.739 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.739 [2024-07-15 13:45:07.548826] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.996 [2024-07-15 13:45:07.658410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.254 13:45:07 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:13.254 13:45:07 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:06:13.254 13:45:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:13.254 13:45:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:13.254 13:45:07 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.254 13:45:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:13.254 { 00:06:13.254 "filename": "/tmp/spdk_mem_dump.txt" 00:06:13.254 } 00:06:13.254 13:45:07 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.254 13:45:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:13.254 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:13.254 1 heaps totaling size 814.000000 MiB 00:06:13.254 size: 814.000000 MiB heap id: 0 00:06:13.254 end heaps---------- 00:06:13.254 8 mempools totaling size 598.116089 MiB 00:06:13.254 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:13.254 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:13.254 size: 84.521057 MiB name: bdev_io_3636156 00:06:13.254 size: 51.011292 MiB name: evtpool_3636156 00:06:13.254 
size: 50.003479 MiB name: msgpool_3636156 00:06:13.255 size: 21.763794 MiB name: PDU_Pool 00:06:13.255 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:13.255 size: 0.026123 MiB name: Session_Pool 00:06:13.255 end mempools------- 00:06:13.255 6 memzones totaling size 4.142822 MiB 00:06:13.255 size: 1.000366 MiB name: RG_ring_0_3636156 00:06:13.255 size: 1.000366 MiB name: RG_ring_1_3636156 00:06:13.255 size: 1.000366 MiB name: RG_ring_4_3636156 00:06:13.255 size: 1.000366 MiB name: RG_ring_5_3636156 00:06:13.255 size: 0.125366 MiB name: RG_ring_2_3636156 00:06:13.255 size: 0.015991 MiB name: RG_ring_3_3636156 00:06:13.255 end memzones------- 00:06:13.255 13:45:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:13.255 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:13.255 list of free elements. size: 12.519348 MiB 00:06:13.255 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:13.255 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:13.255 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:13.255 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:13.255 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:13.255 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:13.255 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:13.255 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:13.255 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:13.255 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:13.255 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:13.255 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:13.255 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:13.255 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:13.255 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:13.255 list of standard malloc elements. 
size: 199.218079 MiB 00:06:13.255 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:13.255 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:13.255 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:13.255 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:13.255 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:13.255 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:13.255 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:13.255 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:13.255 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:13.255 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:13.255 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:13.255 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:13.255 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:13.255 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:13.255 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:13.255 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:13.255 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:13.255 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:13.255 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:13.255 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:13.255 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:13.255 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:13.255 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:13.255 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:13.255 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:13.255 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:13.255 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:13.255 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:13.255 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:13.255 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:13.255 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:13.255 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:13.255 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:13.255 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:13.255 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:13.255 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:13.255 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:13.255 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:13.255 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:13.255 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:13.255 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:13.255 list of memzone associated elements. 
size: 602.262573 MiB 00:06:13.255 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:13.255 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:13.255 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:13.255 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:13.255 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:13.255 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3636156_0 00:06:13.255 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:13.255 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3636156_0 00:06:13.255 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:13.255 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3636156_0 00:06:13.255 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:13.255 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:13.255 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:13.255 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:13.255 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:13.255 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3636156 00:06:13.255 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:13.255 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3636156 00:06:13.255 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:13.255 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3636156 00:06:13.255 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:13.255 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:13.255 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:13.255 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:13.255 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:13.255 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:13.255 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:13.255 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:13.255 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:13.255 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3636156 00:06:13.255 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:13.255 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3636156 00:06:13.255 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:13.255 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3636156 00:06:13.255 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:13.255 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3636156 00:06:13.255 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:13.255 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3636156 00:06:13.255 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:13.255 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:13.255 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:13.255 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:13.255 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:13.255 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:13.255 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:13.255 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_3636156 00:06:13.255 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:13.255 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:13.255 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:13.255 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:13.255 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:13.255 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3636156 00:06:13.255 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:13.255 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:13.255 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:13.255 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3636156 00:06:13.255 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:13.255 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3636156 00:06:13.255 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:13.255 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:13.255 13:45:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:13.255 13:45:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3636156 00:06:13.255 13:45:08 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 3636156 ']' 00:06:13.255 13:45:08 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 3636156 00:06:13.255 13:45:08 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:06:13.255 13:45:08 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:13.255 13:45:08 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3636156 00:06:13.255 13:45:08 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:13.255 13:45:08 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:13.255 13:45:08 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3636156' 00:06:13.255 killing process with pid 3636156 00:06:13.255 13:45:08 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 3636156 00:06:13.255 13:45:08 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 3636156 00:06:13.821 00:06:13.821 real 0m1.076s 00:06:13.821 user 0m1.069s 00:06:13.821 sys 0m0.369s 00:06:13.821 13:45:08 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.821 13:45:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:13.821 ************************************ 00:06:13.821 END TEST dpdk_mem_utility 00:06:13.821 ************************************ 00:06:13.821 13:45:08 -- common/autotest_common.sh@1142 -- # return 0 00:06:13.821 13:45:08 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:13.821 13:45:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:13.821 13:45:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.821 13:45:08 -- common/autotest_common.sh@10 -- # set +x 00:06:13.821 ************************************ 00:06:13.821 START TEST event 00:06:13.821 ************************************ 00:06:13.821 13:45:08 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:13.821 * Looking for test storage... 
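The dpdk_mem_utility pass above asks the running spdk_tgt for env_dpdk_get_mem_stats (the RPC reply names the dump file /tmp/spdk_mem_dump.txt) and then summarizes it with dpdk_mem_info.py; roughly, by hand, and assuming the script's default dump location matches the path shown in the RPC output:
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK"/scripts/rpc.py env_dpdk_get_mem_stats    # target writes /tmp/spdk_mem_dump.txt
    "$SPDK"/scripts/dpdk_mem_info.py                 # heap / mempool / memzone summary, as dumped above
    "$SPDK"/scripts/dpdk_mem_info.py -m 0            # element-level detail for the heap listing above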
00:06:13.821 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:13.821 13:45:08 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:13.821 13:45:08 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:13.821 13:45:08 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:13.821 13:45:08 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:13.821 13:45:08 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.821 13:45:08 event -- common/autotest_common.sh@10 -- # set +x 00:06:13.821 ************************************ 00:06:13.821 START TEST event_perf 00:06:13.821 ************************************ 00:06:13.821 13:45:08 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:13.821 Running I/O for 1 seconds...[2024-07-15 13:45:08.606551] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:06:13.821 [2024-07-15 13:45:08.606615] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3636576 ] 00:06:13.821 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.079 [2024-07-15 13:45:08.665524] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:14.079 [2024-07-15 13:45:08.774324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.079 [2024-07-15 13:45:08.774388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:14.079 [2024-07-15 13:45:08.774453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:14.079 [2024-07-15 13:45:08.774456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.446 Running I/O for 1 seconds... 00:06:15.446 lcore 0: 233820 00:06:15.446 lcore 1: 233818 00:06:15.446 lcore 2: 233819 00:06:15.446 lcore 3: 233820 00:06:15.446 done. 00:06:15.446 00:06:15.446 real 0m1.292s 00:06:15.446 user 0m4.209s 00:06:15.446 sys 0m0.077s 00:06:15.446 13:45:09 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.446 13:45:09 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:15.446 ************************************ 00:06:15.446 END TEST event_perf 00:06:15.446 ************************************ 00:06:15.446 13:45:09 event -- common/autotest_common.sh@1142 -- # return 0 00:06:15.446 13:45:09 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:15.446 13:45:09 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:15.446 13:45:09 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.446 13:45:09 event -- common/autotest_common.sh@10 -- # set +x 00:06:15.446 ************************************ 00:06:15.446 START TEST event_reactor 00:06:15.446 ************************************ 00:06:15.446 13:45:09 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:15.446 [2024-07-15 13:45:09.953343] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
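event_perf above runs with -m 0xF and four reactors each report a per-lcore event count; the mask is simply a bitmap of lcores, which a few lines of shell can decode (illustration only, not part of the test):
    mask=0xF                             # 0xF = 1111b -> lcores 0,1,2,3, matching the four counters above
    for (( c = 0; c < 8; c++ )); do
        (( (mask >> c) & 1 )) && echo "lcore $c enabled"
    done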
00:06:15.446 [2024-07-15 13:45:09.953410] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3636852 ] 00:06:15.446 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.446 [2024-07-15 13:45:10.016283] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.446 [2024-07-15 13:45:10.139389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.815 test_start 00:06:16.815 oneshot 00:06:16.815 tick 100 00:06:16.815 tick 100 00:06:16.815 tick 250 00:06:16.815 tick 100 00:06:16.815 tick 100 00:06:16.815 tick 100 00:06:16.815 tick 250 00:06:16.815 tick 500 00:06:16.815 tick 100 00:06:16.815 tick 100 00:06:16.815 tick 250 00:06:16.815 tick 100 00:06:16.815 tick 100 00:06:16.815 test_end 00:06:16.815 00:06:16.815 real 0m1.307s 00:06:16.815 user 0m1.220s 00:06:16.815 sys 0m0.082s 00:06:16.815 13:45:11 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.815 13:45:11 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:16.815 ************************************ 00:06:16.815 END TEST event_reactor 00:06:16.815 ************************************ 00:06:16.815 13:45:11 event -- common/autotest_common.sh@1142 -- # return 0 00:06:16.815 13:45:11 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:16.815 13:45:11 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:16.815 13:45:11 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.815 13:45:11 event -- common/autotest_common.sh@10 -- # set +x 00:06:16.815 ************************************ 00:06:16.815 START TEST event_reactor_perf 00:06:16.815 ************************************ 00:06:16.815 13:45:11 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:16.815 [2024-07-15 13:45:11.307602] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
00:06:16.815 [2024-07-15 13:45:11.307668] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3637019 ] 00:06:16.815 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.815 [2024-07-15 13:45:11.366663] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.815 [2024-07-15 13:45:11.473397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.745 test_start 00:06:17.745 test_end 00:06:17.745 Performance: 449653 events per second 00:06:18.002 00:06:18.002 real 0m1.293s 00:06:18.002 user 0m1.213s 00:06:18.002 sys 0m0.075s 00:06:18.002 13:45:12 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.002 13:45:12 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:18.002 ************************************ 00:06:18.002 END TEST event_reactor_perf 00:06:18.002 ************************************ 00:06:18.002 13:45:12 event -- common/autotest_common.sh@1142 -- # return 0 00:06:18.002 13:45:12 event -- event/event.sh@49 -- # uname -s 00:06:18.002 13:45:12 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:18.002 13:45:12 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:18.002 13:45:12 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:18.002 13:45:12 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.002 13:45:12 event -- common/autotest_common.sh@10 -- # set +x 00:06:18.002 ************************************ 00:06:18.002 START TEST event_scheduler 00:06:18.002 ************************************ 00:06:18.002 13:45:12 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:18.002 * Looking for test storage... 00:06:18.002 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:18.002 13:45:12 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:18.002 13:45:12 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3637315 00:06:18.002 13:45:12 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:18.002 13:45:12 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:18.002 13:45:12 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3637315 00:06:18.002 13:45:12 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 3637315 ']' 00:06:18.002 13:45:12 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.002 13:45:12 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:18.002 13:45:12 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
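Each suite traced here installs a trap so the backgrounded target dies even if the test aborts (alias_rpc traps ERR, spdkcli_tcp and the scheduler test trap SIGINT SIGTERM EXIT and clear it before normal teardown); reduced to a sketch with a plain kill standing in for the killprocess helper:
    ./build/bin/spdk_tgt -m 0x1 &                                 # assumed relative path for the sketch
    spdk_tgt_pid=$!
    trap 'kill "$spdk_tgt_pid" 2>/dev/null; exit 1' SIGINT SIGTERM EXIT
    # ... test body ...
    trap - SIGINT SIGTERM EXIT           # clear the trap before the normal teardown
    kill "$spdk_tgt_pid"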
00:06:18.002 13:45:12 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:18.002 13:45:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:18.002 [2024-07-15 13:45:12.730979] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:06:18.002 [2024-07-15 13:45:12.731066] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3637315 ] 00:06:18.002 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.002 [2024-07-15 13:45:12.788483] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:18.260 [2024-07-15 13:45:12.898650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.260 [2024-07-15 13:45:12.898713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.260 [2024-07-15 13:45:12.898780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:18.260 [2024-07-15 13:45:12.898784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:18.260 13:45:12 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:18.260 13:45:12 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:06:18.260 13:45:12 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:18.260 13:45:12 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.260 13:45:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:18.260 [2024-07-15 13:45:12.935506] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:18.260 [2024-07-15 13:45:12.935532] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:18.260 [2024-07-15 13:45:12.935549] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:18.260 [2024-07-15 13:45:12.935560] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:18.260 [2024-07-15 13:45:12.935570] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:18.260 13:45:12 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.260 13:45:12 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:18.260 13:45:12 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.260 13:45:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:18.260 [2024-07-15 13:45:13.032941] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:06:18.260 13:45:13 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.260 13:45:13 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:18.260 13:45:13 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:18.260 13:45:13 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.260 13:45:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:18.260 ************************************ 00:06:18.260 START TEST scheduler_create_thread 00:06:18.260 ************************************ 00:06:18.260 13:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:06:18.260 13:45:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:18.260 13:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.260 13:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.260 2 00:06:18.260 13:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.260 13:45:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:18.260 13:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.260 13:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.260 3 00:06:18.260 13:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.260 13:45:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:18.260 13:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.260 13:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.260 4 00:06:18.260 13:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.260 13:45:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:18.260 13:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.260 13:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.260 5 00:06:18.260 13:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.260 13:45:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:18.260 13:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.260 13:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.518 6 00:06:18.518 13:45:13 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.518 13:45:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:18.518 13:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.518 13:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.518 7 00:06:18.518 13:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.518 13:45:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:18.518 13:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.518 13:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.518 8 00:06:18.518 13:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.518 13:45:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:18.518 13:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.518 13:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.518 9 00:06:18.518 13:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.518 13:45:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:18.518 13:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.518 13:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.518 10 00:06:18.518 13:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.518 13:45:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:18.518 13:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.518 13:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.518 13:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.518 13:45:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:18.518 13:45:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:18.518 13:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.518 13:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.518 13:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.518 13:45:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:18.518 13:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.518 13:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.518 13:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.518 13:45:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:18.518 13:45:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:18.518 13:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.518 13:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.084 13:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:19.084 00:06:19.084 real 0m0.589s 00:06:19.084 user 0m0.013s 00:06:19.084 sys 0m0.001s 00:06:19.084 13:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.084 13:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.084 ************************************ 00:06:19.084 END TEST scheduler_create_thread 00:06:19.084 ************************************ 00:06:19.084 13:45:13 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:06:19.084 13:45:13 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:19.084 13:45:13 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3637315 00:06:19.084 13:45:13 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 3637315 ']' 00:06:19.084 13:45:13 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 3637315 00:06:19.084 13:45:13 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:06:19.084 13:45:13 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:19.084 13:45:13 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3637315 00:06:19.084 13:45:13 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:19.084 13:45:13 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:19.084 13:45:13 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3637315' 00:06:19.084 killing process with pid 3637315 00:06:19.084 13:45:13 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 3637315 00:06:19.084 13:45:13 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 3637315 00:06:19.341 [2024-07-15 13:45:14.133047] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
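The scheduler_create_thread run above reduces to a short RPC conversation with the test app started under --wait-for-rpc; rpc_cmd is the autotest wrapper around scripts/rpc.py, and the scheduler_plugin methods exist only in test/event/scheduler, not in stock spdk_tgt. A representative subset of the calls traced above:
    rpc_cmd framework_set_scheduler dynamic        # falls back gracefully when the dpdk governor cannot init
    rpc_cmd framework_start_init
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50    # thread id returned by a create call
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12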
00:06:19.600 00:06:19.600 real 0m1.757s 00:06:19.600 user 0m2.192s 00:06:19.600 sys 0m0.316s 00:06:19.600 13:45:14 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.600 13:45:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:19.600 ************************************ 00:06:19.600 END TEST event_scheduler 00:06:19.600 ************************************ 00:06:19.600 13:45:14 event -- common/autotest_common.sh@1142 -- # return 0 00:06:19.600 13:45:14 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:19.600 13:45:14 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:19.600 13:45:14 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:19.600 13:45:14 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.600 13:45:14 event -- common/autotest_common.sh@10 -- # set +x 00:06:19.857 ************************************ 00:06:19.857 START TEST app_repeat 00:06:19.857 ************************************ 00:06:19.857 13:45:14 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:19.857 13:45:14 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.857 13:45:14 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.857 13:45:14 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:19.857 13:45:14 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:19.857 13:45:14 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:19.857 13:45:14 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:19.857 13:45:14 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:19.857 13:45:14 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3637507 00:06:19.857 13:45:14 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:19.857 13:45:14 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:19.857 13:45:14 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3637507' 00:06:19.857 Process app_repeat pid: 3637507 00:06:19.857 13:45:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:19.857 13:45:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:19.857 spdk_app_start Round 0 00:06:19.857 13:45:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3637507 /var/tmp/spdk-nbd.sock 00:06:19.857 13:45:14 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3637507 ']' 00:06:19.857 13:45:14 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:19.857 13:45:14 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:19.857 13:45:14 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:19.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:19.857 13:45:14 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:19.857 13:45:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:19.857 [2024-07-15 13:45:14.475025] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
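The app_repeat round that follows pairs two malloc bdevs with /dev/nbd0 and /dev/nbd1 (per the bdev_list/nbd_list declared above) over /var/tmp/spdk-nbd.sock; the same setup as plain rpc.py calls, with 64 taken as the size in MB and 4096 as the block size passed below:
    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $RPC bdev_malloc_create 64 4096          # -> Malloc0
    $RPC bdev_malloc_create 64 4096          # -> Malloc1
    $RPC nbd_start_disk Malloc0 /dev/nbd0
    $RPC nbd_start_disk Malloc1 /dev/nbd1    # second disk follows the same pairing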
00:06:19.857 [2024-07-15 13:45:14.475090] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3637507 ] 00:06:19.857 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.857 [2024-07-15 13:45:14.530744] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:19.857 [2024-07-15 13:45:14.632905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.857 [2024-07-15 13:45:14.632910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.115 13:45:14 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:20.115 13:45:14 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:20.115 13:45:14 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:20.372 Malloc0 00:06:20.372 13:45:15 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:20.629 Malloc1 00:06:20.629 13:45:15 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:20.629 13:45:15 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.629 13:45:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:20.629 13:45:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:20.629 13:45:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.629 13:45:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:20.629 13:45:15 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:20.629 13:45:15 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.629 13:45:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:20.629 13:45:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:20.629 13:45:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.629 13:45:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:20.629 13:45:15 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:20.630 13:45:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:20.630 13:45:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:20.630 13:45:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:20.886 /dev/nbd0 00:06:20.887 13:45:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:20.887 13:45:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:20.887 13:45:15 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:20.887 13:45:15 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:20.887 13:45:15 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:20.887 13:45:15 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:20.887 13:45:15 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:20.887 13:45:15 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:20.887 13:45:15 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:20.887 13:45:15 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:20.887 13:45:15 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:20.887 1+0 records in 00:06:20.887 1+0 records out 00:06:20.887 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000169163 s, 24.2 MB/s 00:06:20.887 13:45:15 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:20.887 13:45:15 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:20.887 13:45:15 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:20.887 13:45:15 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:20.887 13:45:15 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:20.887 13:45:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:20.887 13:45:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:20.887 13:45:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:21.143 /dev/nbd1 00:06:21.143 13:45:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:21.143 13:45:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:21.143 13:45:15 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:21.143 13:45:15 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:21.143 13:45:15 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:21.143 13:45:15 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:21.143 13:45:15 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:21.143 13:45:15 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:21.143 13:45:15 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:21.143 13:45:15 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:21.143 13:45:15 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:21.143 1+0 records in 00:06:21.143 1+0 records out 00:06:21.143 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000220313 s, 18.6 MB/s 00:06:21.143 13:45:15 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:21.144 13:45:15 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:21.144 13:45:15 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:21.144 13:45:15 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:21.144 13:45:15 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:21.144 13:45:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:21.144 13:45:15 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:21.144 13:45:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:21.144 13:45:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.144 13:45:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:21.404 13:45:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:21.404 { 00:06:21.404 "nbd_device": "/dev/nbd0", 00:06:21.404 "bdev_name": "Malloc0" 00:06:21.404 }, 00:06:21.404 { 00:06:21.404 "nbd_device": "/dev/nbd1", 00:06:21.404 "bdev_name": "Malloc1" 00:06:21.404 } 00:06:21.404 ]' 00:06:21.404 13:45:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:21.404 { 00:06:21.404 "nbd_device": "/dev/nbd0", 00:06:21.404 "bdev_name": "Malloc0" 00:06:21.404 }, 00:06:21.404 { 00:06:21.404 "nbd_device": "/dev/nbd1", 00:06:21.404 "bdev_name": "Malloc1" 00:06:21.404 } 00:06:21.404 ]' 00:06:21.404 13:45:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:21.404 13:45:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:21.404 /dev/nbd1' 00:06:21.404 13:45:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:21.404 /dev/nbd1' 00:06:21.404 13:45:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:21.404 13:45:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:21.404 13:45:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:21.404 13:45:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:21.404 13:45:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:21.404 13:45:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:21.404 13:45:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.404 13:45:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:21.404 13:45:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:21.404 13:45:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:21.404 13:45:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:21.404 13:45:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:21.404 256+0 records in 00:06:21.404 256+0 records out 00:06:21.404 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00498831 s, 210 MB/s 00:06:21.404 13:45:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:21.404 13:45:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:21.404 256+0 records in 00:06:21.404 256+0 records out 00:06:21.404 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0213619 s, 49.1 MB/s 00:06:21.404 13:45:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:21.404 13:45:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:21.404 256+0 records in 00:06:21.404 256+0 records out 00:06:21.404 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0230447 s, 45.5 MB/s 00:06:21.404 13:45:16 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:21.404 13:45:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.404 13:45:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:21.404 13:45:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:21.404 13:45:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:21.404 13:45:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:21.404 13:45:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:21.404 13:45:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:21.404 13:45:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:21.404 13:45:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:21.404 13:45:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:21.404 13:45:16 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:21.404 13:45:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:21.404 13:45:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.404 13:45:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.404 13:45:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:21.404 13:45:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:21.404 13:45:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.404 13:45:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:21.688 13:45:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:21.688 13:45:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:21.688 13:45:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:21.688 13:45:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.688 13:45:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.688 13:45:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:21.688 13:45:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:21.688 13:45:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.688 13:45:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.688 13:45:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:21.969 13:45:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:21.969 13:45:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:21.969 13:45:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:21.969 13:45:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.969 13:45:16 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.969 13:45:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:21.969 13:45:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:21.969 13:45:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.969 13:45:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:21.969 13:45:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.969 13:45:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:22.226 13:45:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:22.226 13:45:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:22.226 13:45:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:22.226 13:45:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:22.226 13:45:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:22.226 13:45:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:22.226 13:45:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:22.226 13:45:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:22.226 13:45:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:22.226 13:45:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:22.226 13:45:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:22.226 13:45:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:22.226 13:45:16 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:22.483 13:45:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:22.740 [2024-07-15 13:45:17.536237] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:22.998 [2024-07-15 13:45:17.639774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.998 [2024-07-15 13:45:17.639775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.998 [2024-07-15 13:45:17.697188] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:22.998 [2024-07-15 13:45:17.697261] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:25.519 13:45:20 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:25.519 13:45:20 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:25.519 spdk_app_start Round 1 00:06:25.519 13:45:20 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3637507 /var/tmp/spdk-nbd.sock 00:06:25.519 13:45:20 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3637507 ']' 00:06:25.519 13:45:20 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:25.519 13:45:20 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:25.519 13:45:20 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:25.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
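Rounds 0, 1 and 2 all repeat the same malloc/nbd write-and-verify cycle seen above; condensed to the underlying commands (socket, RPC names and block sizes exactly as in the trace, temp-file paths shortened), one round is roughly:

  scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096        # -> Malloc0
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096        # -> Malloc1
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
  dd if=/dev/urandom of=nbdrandtest bs=4096 count=256                        # reference data
  for d in /dev/nbd0 /dev/nbd1; do
          dd if=nbdrandtest of=$d bs=4096 count=256 oflag=direct             # write it to each exported device
          cmp -b -n 1M nbdrandtest $d                                        # read back and verify
  done
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM        # end the round; app_repeat then starts the next one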
00:06:25.519 13:45:20 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:25.519 13:45:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:25.776 13:45:20 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:25.776 13:45:20 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:25.776 13:45:20 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:26.033 Malloc0 00:06:26.033 13:45:20 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:26.290 Malloc1 00:06:26.290 13:45:21 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:26.290 13:45:21 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.290 13:45:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:26.290 13:45:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:26.290 13:45:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.290 13:45:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:26.290 13:45:21 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:26.290 13:45:21 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.290 13:45:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:26.290 13:45:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:26.290 13:45:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.290 13:45:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:26.290 13:45:21 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:26.290 13:45:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:26.290 13:45:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.290 13:45:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:26.547 /dev/nbd0 00:06:26.547 13:45:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:26.547 13:45:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:26.547 13:45:21 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:26.547 13:45:21 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:26.547 13:45:21 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:26.547 13:45:21 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:26.547 13:45:21 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:26.547 13:45:21 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:26.547 13:45:21 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:26.547 13:45:21 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:26.547 13:45:21 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:26.547 1+0 records in 00:06:26.547 1+0 records out 00:06:26.547 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000223939 s, 18.3 MB/s 00:06:26.547 13:45:21 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:26.547 13:45:21 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:26.547 13:45:21 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:26.547 13:45:21 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:26.547 13:45:21 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:26.547 13:45:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.547 13:45:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.547 13:45:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:26.805 /dev/nbd1 00:06:26.805 13:45:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:26.805 13:45:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:26.805 13:45:21 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:26.805 13:45:21 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:26.805 13:45:21 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:26.805 13:45:21 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:26.805 13:45:21 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:26.805 13:45:21 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:26.805 13:45:21 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:26.805 13:45:21 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:26.805 13:45:21 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:26.805 1+0 records in 00:06:26.805 1+0 records out 00:06:26.805 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000188664 s, 21.7 MB/s 00:06:26.805 13:45:21 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:26.805 13:45:21 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:26.805 13:45:21 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:26.805 13:45:21 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:26.805 13:45:21 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:26.805 13:45:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.805 13:45:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.805 13:45:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:26.805 13:45:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.805 13:45:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:27.062 13:45:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:27.062 { 00:06:27.062 "nbd_device": "/dev/nbd0", 00:06:27.062 "bdev_name": "Malloc0" 00:06:27.062 }, 00:06:27.062 { 00:06:27.062 "nbd_device": "/dev/nbd1", 00:06:27.062 "bdev_name": "Malloc1" 00:06:27.062 } 00:06:27.062 ]' 00:06:27.062 13:45:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:27.062 { 00:06:27.062 "nbd_device": "/dev/nbd0", 00:06:27.062 "bdev_name": "Malloc0" 00:06:27.062 }, 00:06:27.062 { 00:06:27.062 "nbd_device": "/dev/nbd1", 00:06:27.062 "bdev_name": "Malloc1" 00:06:27.062 } 00:06:27.062 ]' 00:06:27.062 13:45:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:27.062 13:45:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:27.062 /dev/nbd1' 00:06:27.062 13:45:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:27.062 /dev/nbd1' 00:06:27.062 13:45:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:27.062 13:45:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:27.062 13:45:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:27.062 13:45:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:27.062 13:45:21 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:27.062 13:45:21 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:27.062 13:45:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.062 13:45:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:27.062 13:45:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:27.062 13:45:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:27.062 13:45:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:27.062 13:45:21 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:27.319 256+0 records in 00:06:27.319 256+0 records out 00:06:27.319 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00516192 s, 203 MB/s 00:06:27.319 13:45:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:27.319 13:45:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:27.319 256+0 records in 00:06:27.319 256+0 records out 00:06:27.319 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0232927 s, 45.0 MB/s 00:06:27.319 13:45:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:27.319 13:45:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:27.319 256+0 records in 00:06:27.319 256+0 records out 00:06:27.319 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0236738 s, 44.3 MB/s 00:06:27.319 13:45:21 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:27.319 13:45:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.319 13:45:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:27.319 13:45:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:27.319 13:45:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:27.319 13:45:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:27.319 13:45:21 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:27.319 13:45:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:27.319 13:45:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:27.319 13:45:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:27.319 13:45:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:27.319 13:45:21 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:27.319 13:45:21 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:27.319 13:45:21 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.319 13:45:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.319 13:45:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:27.319 13:45:21 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:27.319 13:45:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:27.319 13:45:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:27.576 13:45:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:27.576 13:45:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:27.576 13:45:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:27.576 13:45:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:27.576 13:45:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:27.576 13:45:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:27.576 13:45:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:27.576 13:45:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:27.576 13:45:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:27.576 13:45:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:27.833 13:45:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:27.833 13:45:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:27.833 13:45:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:27.833 13:45:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:27.833 13:45:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:27.833 13:45:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:27.833 13:45:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:27.833 13:45:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:27.833 13:45:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:27.833 13:45:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.833 13:45:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:28.089 13:45:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:28.089 13:45:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:28.089 13:45:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:28.089 13:45:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:28.089 13:45:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:28.089 13:45:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:28.089 13:45:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:28.089 13:45:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:28.089 13:45:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:28.089 13:45:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:28.089 13:45:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:28.089 13:45:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:28.089 13:45:22 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:28.345 13:45:23 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:28.603 [2024-07-15 13:45:23.332930] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:28.603 [2024-07-15 13:45:23.434872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.603 [2024-07-15 13:45:23.434877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.859 [2024-07-15 13:45:23.488347] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:28.859 [2024-07-15 13:45:23.488419] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:31.384 13:45:26 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:31.384 13:45:26 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:31.384 spdk_app_start Round 2 00:06:31.384 13:45:26 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3637507 /var/tmp/spdk-nbd.sock 00:06:31.384 13:45:26 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3637507 ']' 00:06:31.384 13:45:26 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:31.384 13:45:26 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:31.384 13:45:26 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:31.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
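The nbd_get_count checks sprinkled through each round just count /dev/nbd entries in the nbd_get_disks JSON (2 while the disks are exported, 0 after nbd_stop_disk); the jq pipeline in the trace amounts to:

  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
          | jq -r '.[] | .nbd_device' \
          | grep -c /dev/nbd     # grep exits non-zero on an empty list, which the helper tolerates (the bare "true" above)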
00:06:31.384 13:45:26 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:31.384 13:45:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:31.643 13:45:26 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:31.643 13:45:26 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:31.643 13:45:26 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:31.901 Malloc0 00:06:31.901 13:45:26 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:32.159 Malloc1 00:06:32.159 13:45:26 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:32.159 13:45:26 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.159 13:45:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:32.159 13:45:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:32.159 13:45:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.159 13:45:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:32.159 13:45:26 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:32.159 13:45:26 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.159 13:45:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:32.159 13:45:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:32.159 13:45:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.159 13:45:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:32.159 13:45:26 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:32.159 13:45:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:32.159 13:45:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.159 13:45:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:32.416 /dev/nbd0 00:06:32.416 13:45:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:32.416 13:45:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:32.416 13:45:27 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:32.416 13:45:27 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:32.416 13:45:27 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:32.416 13:45:27 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:32.416 13:45:27 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:32.416 13:45:27 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:32.416 13:45:27 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:32.416 13:45:27 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:32.417 13:45:27 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:32.417 1+0 records in 00:06:32.417 1+0 records out 00:06:32.417 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000191606 s, 21.4 MB/s 00:06:32.417 13:45:27 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:32.417 13:45:27 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:32.417 13:45:27 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:32.417 13:45:27 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:32.417 13:45:27 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:32.417 13:45:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:32.417 13:45:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.417 13:45:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:32.674 /dev/nbd1 00:06:32.674 13:45:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:32.674 13:45:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:32.674 13:45:27 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:32.674 13:45:27 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:32.674 13:45:27 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:32.674 13:45:27 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:32.674 13:45:27 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:32.674 13:45:27 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:32.674 13:45:27 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:32.674 13:45:27 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:32.674 13:45:27 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:32.674 1+0 records in 00:06:32.674 1+0 records out 00:06:32.674 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000181252 s, 22.6 MB/s 00:06:32.674 13:45:27 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:32.674 13:45:27 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:32.674 13:45:27 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:32.674 13:45:27 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:32.674 13:45:27 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:32.674 13:45:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:32.674 13:45:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.674 13:45:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:32.674 13:45:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.674 13:45:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:32.932 13:45:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:32.932 { 00:06:32.932 "nbd_device": "/dev/nbd0", 00:06:32.932 "bdev_name": "Malloc0" 00:06:32.932 }, 00:06:32.932 { 00:06:32.932 "nbd_device": "/dev/nbd1", 00:06:32.932 "bdev_name": "Malloc1" 00:06:32.932 } 00:06:32.932 ]' 00:06:32.932 13:45:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:32.932 { 00:06:32.932 "nbd_device": "/dev/nbd0", 00:06:32.932 "bdev_name": "Malloc0" 00:06:32.932 }, 00:06:32.932 { 00:06:32.932 "nbd_device": "/dev/nbd1", 00:06:32.932 "bdev_name": "Malloc1" 00:06:32.932 } 00:06:32.932 ]' 00:06:32.932 13:45:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:32.932 13:45:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:32.932 /dev/nbd1' 00:06:32.932 13:45:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:32.932 /dev/nbd1' 00:06:32.932 13:45:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:32.932 13:45:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:32.932 13:45:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:32.932 13:45:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:32.932 13:45:27 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:32.932 13:45:27 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:32.932 13:45:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.932 13:45:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:32.932 13:45:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:32.932 13:45:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:32.932 13:45:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:32.932 13:45:27 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:32.932 256+0 records in 00:06:32.932 256+0 records out 00:06:32.932 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00378224 s, 277 MB/s 00:06:32.932 13:45:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:32.932 13:45:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:32.932 256+0 records in 00:06:32.932 256+0 records out 00:06:32.932 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0212799 s, 49.3 MB/s 00:06:32.932 13:45:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:32.932 13:45:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:32.932 256+0 records in 00:06:32.932 256+0 records out 00:06:32.932 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0253299 s, 41.4 MB/s 00:06:32.932 13:45:27 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:32.932 13:45:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.932 13:45:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:32.932 13:45:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:32.932 13:45:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:32.932 13:45:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:32.932 13:45:27 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:32.932 13:45:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:32.932 13:45:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:32.932 13:45:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:32.932 13:45:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:32.932 13:45:27 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:32.932 13:45:27 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:32.932 13:45:27 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.932 13:45:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.932 13:45:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:32.932 13:45:27 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:32.932 13:45:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:32.932 13:45:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:33.191 13:45:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:33.191 13:45:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:33.191 13:45:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:33.191 13:45:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:33.191 13:45:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:33.191 13:45:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:33.191 13:45:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:33.191 13:45:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:33.191 13:45:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:33.191 13:45:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:33.757 13:45:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:33.757 13:45:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:33.757 13:45:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:33.757 13:45:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:33.758 13:45:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:33.758 13:45:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:33.758 13:45:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:33.758 13:45:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:33.758 13:45:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:33.758 13:45:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.758 13:45:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:33.758 13:45:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:33.758 13:45:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:33.758 13:45:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:33.758 13:45:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:33.758 13:45:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:33.758 13:45:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:33.758 13:45:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:34.016 13:45:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:34.016 13:45:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:34.016 13:45:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:34.016 13:45:28 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:34.016 13:45:28 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:34.016 13:45:28 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:34.276 13:45:28 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:34.276 [2024-07-15 13:45:29.113721] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:34.533 [2024-07-15 13:45:29.216282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.533 [2024-07-15 13:45:29.216282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.533 [2024-07-15 13:45:29.274460] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:34.533 [2024-07-15 13:45:29.274532] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:37.068 13:45:31 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3637507 /var/tmp/spdk-nbd.sock 00:06:37.068 13:45:31 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3637507 ']' 00:06:37.068 13:45:31 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:37.068 13:45:31 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:37.068 13:45:31 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:37.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:37.068 13:45:31 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:37.068 13:45:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:37.327 13:45:32 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:37.327 13:45:32 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:37.327 13:45:32 event.app_repeat -- event/event.sh@39 -- # killprocess 3637507 00:06:37.327 13:45:32 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 3637507 ']' 00:06:37.327 13:45:32 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 3637507 00:06:37.327 13:45:32 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:37.327 13:45:32 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:37.327 13:45:32 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3637507 00:06:37.327 13:45:32 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:37.327 13:45:32 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:37.327 13:45:32 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3637507' 00:06:37.327 killing process with pid 3637507 00:06:37.327 13:45:32 event.app_repeat -- common/autotest_common.sh@967 -- # kill 3637507 00:06:37.327 13:45:32 event.app_repeat -- common/autotest_common.sh@972 -- # wait 3637507 00:06:37.585 spdk_app_start is called in Round 0. 00:06:37.585 Shutdown signal received, stop current app iteration 00:06:37.585 Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 reinitialization... 00:06:37.585 spdk_app_start is called in Round 1. 00:06:37.585 Shutdown signal received, stop current app iteration 00:06:37.585 Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 reinitialization... 00:06:37.585 spdk_app_start is called in Round 2. 00:06:37.585 Shutdown signal received, stop current app iteration 00:06:37.585 Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 reinitialization... 00:06:37.585 spdk_app_start is called in Round 3. 
00:06:37.585 Shutdown signal received, stop current app iteration 00:06:37.585 13:45:32 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:37.585 13:45:32 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:37.585 00:06:37.585 real 0m17.938s 00:06:37.585 user 0m38.944s 00:06:37.585 sys 0m3.221s 00:06:37.585 13:45:32 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:37.585 13:45:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:37.585 ************************************ 00:06:37.585 END TEST app_repeat 00:06:37.585 ************************************ 00:06:37.585 13:45:32 event -- common/autotest_common.sh@1142 -- # return 0 00:06:37.585 13:45:32 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:37.585 13:45:32 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:37.585 13:45:32 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:37.585 13:45:32 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.585 13:45:32 event -- common/autotest_common.sh@10 -- # set +x 00:06:37.842 ************************************ 00:06:37.842 START TEST cpu_locks 00:06:37.842 ************************************ 00:06:37.842 13:45:32 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:37.842 * Looking for test storage... 00:06:37.842 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:37.842 13:45:32 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:37.842 13:45:32 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:37.842 13:45:32 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:37.842 13:45:32 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:37.842 13:45:32 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:37.842 13:45:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.842 13:45:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.842 ************************************ 00:06:37.842 START TEST default_locks 00:06:37.842 ************************************ 00:06:37.842 13:45:32 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:37.842 13:45:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3639866 00:06:37.842 13:45:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:37.842 13:45:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3639866 00:06:37.842 13:45:32 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 3639866 ']' 00:06:37.842 13:45:32 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.842 13:45:32 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:37.842 13:45:32 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
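The default_locks test that begins here checks, as the trace below shows, that a running spdk_tgt holds its CPU core lock file (the spdk_cpu_lock entries lslocks reports) and that the lock disappears with the process; the essence of the check, with the pid from this run, is:

  lslocks -p 3639866 | grep -q spdk_cpu_lock     # the live target holds an spdk_cpu_lock file
  kill 3639866                                   # killprocess: stop the target and wait for it to exit
  # a later waitforlisten on the same pid is expected to fail ("No such process"), which the NOT wrapper asserts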
00:06:37.842 13:45:32 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:37.842 13:45:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.842 [2024-07-15 13:45:32.567762] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:06:37.842 [2024-07-15 13:45:32.567839] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3639866 ] 00:06:37.842 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.842 [2024-07-15 13:45:32.624819] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.101 [2024-07-15 13:45:32.731254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.359 13:45:32 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:38.359 13:45:32 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:38.359 13:45:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3639866 00:06:38.359 13:45:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3639866 00:06:38.359 13:45:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:38.618 lslocks: write error 00:06:38.618 13:45:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3639866 00:06:38.618 13:45:33 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 3639866 ']' 00:06:38.618 13:45:33 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 3639866 00:06:38.618 13:45:33 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:38.618 13:45:33 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:38.618 13:45:33 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3639866 00:06:38.618 13:45:33 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:38.618 13:45:33 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:38.618 13:45:33 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3639866' 00:06:38.618 killing process with pid 3639866 00:06:38.618 13:45:33 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 3639866 00:06:38.618 13:45:33 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 3639866 00:06:39.186 13:45:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3639866 00:06:39.186 13:45:33 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:39.186 13:45:33 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3639866 00:06:39.186 13:45:33 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:39.186 13:45:33 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:39.186 13:45:33 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:39.186 13:45:33 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:39.186 13:45:33 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 3639866 00:06:39.186 13:45:33 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 3639866 ']' 00:06:39.186 13:45:33 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.186 13:45:33 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:39.186 13:45:33 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.186 13:45:33 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:39.186 13:45:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:39.186 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3639866) - No such process 00:06:39.186 ERROR: process (pid: 3639866) is no longer running 00:06:39.186 13:45:33 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:39.186 13:45:33 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:39.186 13:45:33 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:39.186 13:45:33 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:39.186 13:45:33 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:39.186 13:45:33 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:39.186 13:45:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:39.186 13:45:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:39.186 13:45:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:39.186 13:45:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:39.186 00:06:39.186 real 0m1.222s 00:06:39.186 user 0m1.155s 00:06:39.186 sys 0m0.509s 00:06:39.186 13:45:33 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.186 13:45:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:39.186 ************************************ 00:06:39.186 END TEST default_locks 00:06:39.186 ************************************ 00:06:39.186 13:45:33 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:39.186 13:45:33 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:39.186 13:45:33 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:39.186 13:45:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.186 13:45:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:39.186 ************************************ 00:06:39.186 START TEST default_locks_via_rpc 00:06:39.186 ************************************ 00:06:39.186 13:45:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:39.186 13:45:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3640029 00:06:39.186 13:45:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:39.186 13:45:33 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3640029 00:06:39.186 13:45:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3640029 ']' 00:06:39.186 13:45:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.186 13:45:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:39.186 13:45:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.186 13:45:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:39.186 13:45:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.186 [2024-07-15 13:45:33.841860] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:06:39.186 [2024-07-15 13:45:33.841958] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3640029 ] 00:06:39.186 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.186 [2024-07-15 13:45:33.902300] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.187 [2024-07-15 13:45:34.010345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.445 13:45:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:39.445 13:45:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:39.445 13:45:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:39.445 13:45:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.445 13:45:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.445 13:45:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.446 13:45:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:39.446 13:45:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:39.446 13:45:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:39.446 13:45:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:39.446 13:45:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:39.446 13:45:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.446 13:45:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.446 13:45:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.446 13:45:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3640029 00:06:39.446 13:45:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3640029 00:06:39.446 13:45:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:40.016 
13:45:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3640029 00:06:40.016 13:45:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 3640029 ']' 00:06:40.016 13:45:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 3640029 00:06:40.016 13:45:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:40.016 13:45:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:40.016 13:45:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3640029 00:06:40.016 13:45:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:40.016 13:45:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:40.016 13:45:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3640029' 00:06:40.016 killing process with pid 3640029 00:06:40.016 13:45:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 3640029 00:06:40.016 13:45:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 3640029 00:06:40.276 00:06:40.276 real 0m1.202s 00:06:40.276 user 0m1.163s 00:06:40.276 sys 0m0.476s 00:06:40.276 13:45:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:40.276 13:45:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.276 ************************************ 00:06:40.276 END TEST default_locks_via_rpc 00:06:40.276 ************************************ 00:06:40.276 13:45:35 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:40.276 13:45:35 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:40.276 13:45:35 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:40.276 13:45:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.276 13:45:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:40.276 ************************************ 00:06:40.276 START TEST non_locking_app_on_locked_coremask 00:06:40.276 ************************************ 00:06:40.276 13:45:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:40.276 13:45:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3640251 00:06:40.276 13:45:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:40.276 13:45:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3640251 /var/tmp/spdk.sock 00:06:40.276 13:45:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3640251 ']' 00:06:40.276 13:45:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.276 13:45:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:40.276 13:45:35 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.276 13:45:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:40.276 13:45:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.276 [2024-07-15 13:45:35.094994] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:06:40.276 [2024-07-15 13:45:35.095107] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3640251 ] 00:06:40.537 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.537 [2024-07-15 13:45:35.153801] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.537 [2024-07-15 13:45:35.264507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.796 13:45:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:40.796 13:45:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:40.796 13:45:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3640320 00:06:40.796 13:45:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:40.796 13:45:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3640320 /var/tmp/spdk2.sock 00:06:40.796 13:45:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3640320 ']' 00:06:40.796 13:45:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:40.796 13:45:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:40.796 13:45:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:40.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:40.796 13:45:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:40.796 13:45:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.796 [2024-07-15 13:45:35.549610] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:06:40.796 [2024-07-15 13:45:35.549698] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3640320 ] 00:06:40.796 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.796 [2024-07-15 13:45:35.632228] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
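In this test the second target is launched with --disable-cpumask-locks so that it can come up on the already-locked core 0 without contending for the lock file. A hand-run equivalent under the same assumptions as the trace (binary path and sockets as logged; the backgrounding and sleep are only illustrative):

    build/bin/spdk_tgt -m 0x1 &                                                # first target, claims the core-0 lock file
    sleep 2
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock   # second target skips lock claiming and starts normally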
00:06:40.796 [2024-07-15 13:45:35.632253] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.055 [2024-07-15 13:45:35.840441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.988 13:45:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:41.988 13:45:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:41.988 13:45:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3640251 00:06:41.988 13:45:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3640251 00:06:41.988 13:45:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:42.553 lslocks: write error 00:06:42.553 13:45:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3640251 00:06:42.553 13:45:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3640251 ']' 00:06:42.553 13:45:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3640251 00:06:42.553 13:45:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:42.553 13:45:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:42.553 13:45:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3640251 00:06:42.553 13:45:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:42.553 13:45:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:42.553 13:45:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3640251' 00:06:42.553 killing process with pid 3640251 00:06:42.553 13:45:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3640251 00:06:42.553 13:45:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3640251 00:06:43.120 13:45:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3640320 00:06:43.120 13:45:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3640320 ']' 00:06:43.379 13:45:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3640320 00:06:43.379 13:45:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:43.379 13:45:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:43.379 13:45:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3640320 00:06:43.379 13:45:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:43.379 13:45:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:43.379 13:45:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3640320' 00:06:43.379 
killing process with pid 3640320 00:06:43.379 13:45:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3640320 00:06:43.379 13:45:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3640320 00:06:43.636 00:06:43.636 real 0m3.364s 00:06:43.636 user 0m3.557s 00:06:43.636 sys 0m1.046s 00:06:43.636 13:45:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.636 13:45:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.636 ************************************ 00:06:43.636 END TEST non_locking_app_on_locked_coremask 00:06:43.636 ************************************ 00:06:43.636 13:45:38 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:43.636 13:45:38 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:43.636 13:45:38 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:43.636 13:45:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.636 13:45:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.636 ************************************ 00:06:43.636 START TEST locking_app_on_unlocked_coremask 00:06:43.636 ************************************ 00:06:43.636 13:45:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:43.636 13:45:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3640679 00:06:43.636 13:45:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:43.636 13:45:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3640679 /var/tmp/spdk.sock 00:06:43.636 13:45:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3640679 ']' 00:06:43.636 13:45:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.636 13:45:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:43.636 13:45:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.636 13:45:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:43.636 13:45:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.895 [2024-07-15 13:45:38.510960] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
00:06:43.895 [2024-07-15 13:45:38.511057] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3640679 ] 00:06:43.895 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.895 [2024-07-15 13:45:38.568521] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:43.895 [2024-07-15 13:45:38.568559] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.895 [2024-07-15 13:45:38.679665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.153 13:45:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:44.153 13:45:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:44.153 13:45:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3640754 00:06:44.153 13:45:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:44.153 13:45:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3640754 /var/tmp/spdk2.sock 00:06:44.153 13:45:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3640754 ']' 00:06:44.153 13:45:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:44.153 13:45:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:44.153 13:45:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:44.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:44.153 13:45:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:44.153 13:45:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.153 [2024-07-15 13:45:38.963925] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
00:06:44.153 [2024-07-15 13:45:38.964017] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3640754 ] 00:06:44.153 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.410 [2024-07-15 13:45:39.047440] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.670 [2024-07-15 13:45:39.255362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.240 13:45:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:45.240 13:45:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:45.240 13:45:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3640754 00:06:45.240 13:45:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3640754 00:06:45.240 13:45:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:45.823 lslocks: write error 00:06:45.823 13:45:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3640679 00:06:45.823 13:45:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3640679 ']' 00:06:45.823 13:45:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 3640679 00:06:45.823 13:45:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:45.823 13:45:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:45.823 13:45:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3640679 00:06:45.823 13:45:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:45.823 13:45:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:45.823 13:45:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3640679' 00:06:45.823 killing process with pid 3640679 00:06:45.823 13:45:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 3640679 00:06:45.823 13:45:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 3640679 00:06:46.782 13:45:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3640754 00:06:46.782 13:45:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3640754 ']' 00:06:46.782 13:45:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 3640754 00:06:46.782 13:45:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:46.782 13:45:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:46.782 13:45:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3640754 00:06:46.782 13:45:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:06:46.782 13:45:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:46.782 13:45:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3640754' 00:06:46.782 killing process with pid 3640754 00:06:46.782 13:45:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 3640754 00:06:46.782 13:45:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 3640754 00:06:47.040 00:06:47.040 real 0m3.318s 00:06:47.040 user 0m3.462s 00:06:47.040 sys 0m1.054s 00:06:47.040 13:45:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.040 13:45:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.040 ************************************ 00:06:47.040 END TEST locking_app_on_unlocked_coremask 00:06:47.040 ************************************ 00:06:47.040 13:45:41 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:47.040 13:45:41 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:47.040 13:45:41 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:47.040 13:45:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.040 13:45:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:47.040 ************************************ 00:06:47.040 START TEST locking_app_on_locked_coremask 00:06:47.040 ************************************ 00:06:47.040 13:45:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:47.040 13:45:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3641079 00:06:47.040 13:45:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:47.040 13:45:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3641079 /var/tmp/spdk.sock 00:06:47.040 13:45:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3641079 ']' 00:06:47.040 13:45:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.040 13:45:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:47.040 13:45:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.040 13:45:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:47.040 13:45:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.040 [2024-07-15 13:45:41.880379] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
00:06:47.040 [2024-07-15 13:45:41.880484] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3641079 ] 00:06:47.298 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.298 [2024-07-15 13:45:41.940484] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.298 [2024-07-15 13:45:42.042996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.557 13:45:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:47.557 13:45:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:47.557 13:45:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3641191 00:06:47.557 13:45:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:47.557 13:45:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3641191 /var/tmp/spdk2.sock 00:06:47.557 13:45:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:47.557 13:45:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3641191 /var/tmp/spdk2.sock 00:06:47.557 13:45:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:47.557 13:45:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:47.557 13:45:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:47.557 13:45:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:47.557 13:45:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3641191 /var/tmp/spdk2.sock 00:06:47.557 13:45:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3641191 ']' 00:06:47.557 13:45:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:47.557 13:45:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:47.557 13:45:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:47.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:47.557 13:45:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:47.557 13:45:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.557 [2024-07-15 13:45:42.341700] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
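locking_app_on_locked_coremask drives the opposite, failing path: a second spdk_tgt on the same mask but without --disable-cpumask-locks must refuse to start, which is exactly what the claim_cpu_cores error further down shows. A minimal reproduction of that conflict (first target on the default /var/tmp/spdk.sock; ordering and sleep illustrative):

    build/bin/spdk_tgt -m 0x1 &                        # claims core 0
    sleep 2
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock   # aborts: "Cannot create lock on core 0, probably process <pid> has claimed it"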
00:06:47.557 [2024-07-15 13:45:42.341815] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3641191 ] 00:06:47.557 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.817 [2024-07-15 13:45:42.426209] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3641079 has claimed it. 00:06:47.817 [2024-07-15 13:45:42.426277] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:48.386 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3641191) - No such process 00:06:48.386 ERROR: process (pid: 3641191) is no longer running 00:06:48.386 13:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:48.386 13:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:48.386 13:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:48.386 13:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:48.386 13:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:48.386 13:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:48.386 13:45:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3641079 00:06:48.386 13:45:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3641079 00:06:48.386 13:45:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:48.645 lslocks: write error 00:06:48.645 13:45:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3641079 00:06:48.645 13:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3641079 ']' 00:06:48.645 13:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3641079 00:06:48.645 13:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:48.645 13:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:48.645 13:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3641079 00:06:48.645 13:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:48.645 13:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:48.645 13:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3641079' 00:06:48.645 killing process with pid 3641079 00:06:48.645 13:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3641079 00:06:48.645 13:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3641079 00:06:49.212 00:06:49.212 real 0m1.958s 00:06:49.212 user 0m2.113s 00:06:49.212 sys 0m0.623s 00:06:49.212 13:45:43 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.212 13:45:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.212 ************************************ 00:06:49.212 END TEST locking_app_on_locked_coremask 00:06:49.212 ************************************ 00:06:49.212 13:45:43 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:49.212 13:45:43 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:49.212 13:45:43 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:49.212 13:45:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.212 13:45:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:49.212 ************************************ 00:06:49.212 START TEST locking_overlapped_coremask 00:06:49.212 ************************************ 00:06:49.212 13:45:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:49.212 13:45:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3641366 00:06:49.212 13:45:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:49.212 13:45:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3641366 /var/tmp/spdk.sock 00:06:49.212 13:45:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 3641366 ']' 00:06:49.212 13:45:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.212 13:45:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:49.212 13:45:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.212 13:45:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:49.212 13:45:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.212 [2024-07-15 13:45:43.891897] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
00:06:49.212 [2024-07-15 13:45:43.891999] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3641366 ] 00:06:49.212 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.212 [2024-07-15 13:45:43.949671] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:49.471 [2024-07-15 13:45:44.062758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.471 [2024-07-15 13:45:44.062818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:49.471 [2024-07-15 13:45:44.062821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.731 13:45:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:49.731 13:45:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:49.731 13:45:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3641486 00:06:49.731 13:45:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3641486 /var/tmp/spdk2.sock 00:06:49.731 13:45:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:49.731 13:45:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3641486 /var/tmp/spdk2.sock 00:06:49.731 13:45:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:49.731 13:45:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:49.731 13:45:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:49.731 13:45:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:49.731 13:45:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:49.731 13:45:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3641486 /var/tmp/spdk2.sock 00:06:49.731 13:45:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 3641486 ']' 00:06:49.731 13:45:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:49.731 13:45:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:49.731 13:45:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:49.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:49.732 13:45:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:49.732 13:45:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.732 [2024-07-15 13:45:44.366516] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
00:06:49.732 [2024-07-15 13:45:44.366597] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3641486 ] 00:06:49.732 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.732 [2024-07-15 13:45:44.454893] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3641366 has claimed it. 00:06:49.732 [2024-07-15 13:45:44.454943] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:50.296 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3641486) - No such process 00:06:50.296 ERROR: process (pid: 3641486) is no longer running 00:06:50.296 13:45:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:50.296 13:45:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:50.296 13:45:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:50.296 13:45:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:50.296 13:45:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:50.296 13:45:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:50.296 13:45:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:50.296 13:45:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:50.296 13:45:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:50.296 13:45:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:50.296 13:45:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3641366 00:06:50.296 13:45:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 3641366 ']' 00:06:50.296 13:45:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 3641366 00:06:50.296 13:45:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:50.296 13:45:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:50.296 13:45:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3641366 00:06:50.296 13:45:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:50.296 13:45:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:50.296 13:45:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3641366' 00:06:50.296 killing process with pid 3641366 00:06:50.296 13:45:45 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 3641366 00:06:50.296 13:45:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 3641366 00:06:50.864 00:06:50.864 real 0m1.690s 00:06:50.864 user 0m4.510s 00:06:50.864 sys 0m0.435s 00:06:50.864 13:45:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.865 13:45:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:50.865 ************************************ 00:06:50.865 END TEST locking_overlapped_coremask 00:06:50.865 ************************************ 00:06:50.865 13:45:45 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:50.865 13:45:45 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:50.865 13:45:45 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:50.865 13:45:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.865 13:45:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:50.865 ************************************ 00:06:50.865 START TEST locking_overlapped_coremask_via_rpc 00:06:50.865 ************************************ 00:06:50.865 13:45:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:50.865 13:45:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3641656 00:06:50.865 13:45:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:50.865 13:45:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3641656 /var/tmp/spdk.sock 00:06:50.865 13:45:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3641656 ']' 00:06:50.865 13:45:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.865 13:45:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:50.865 13:45:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.865 13:45:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:50.865 13:45:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.865 [2024-07-15 13:45:45.628832] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:06:50.865 [2024-07-15 13:45:45.628933] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3641656 ] 00:06:50.865 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.865 [2024-07-15 13:45:45.690259] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
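Both overlapped-coremask variants pair a 0x7 target (cores 0-2) with a 0x1c target (cores 2-4); the masks intersect on exactly one core, which is why every claim_cpu_cores error in this part of the trace names core 2. The arithmetic, as a quick illustrative check:

    printf 'shared mask: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. bit 2: core 2 is claimed by both targets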
00:06:50.865 [2024-07-15 13:45:45.690295] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:51.125 [2024-07-15 13:45:45.803655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.125 [2024-07-15 13:45:45.806757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.125 [2024-07-15 13:45:45.806823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.382 13:45:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:51.382 13:45:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:51.382 13:45:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3641669 00:06:51.382 13:45:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3641669 /var/tmp/spdk2.sock 00:06:51.382 13:45:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3641669 ']' 00:06:51.382 13:45:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:51.382 13:45:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:51.382 13:45:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:51.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:51.382 13:45:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:51.382 13:45:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:51.382 13:45:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.382 [2024-07-15 13:45:46.102815] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:06:51.382 [2024-07-15 13:45:46.102898] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3641669 ] 00:06:51.382 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.382 [2024-07-15 13:45:46.190420] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:51.382 [2024-07-15 13:45:46.190452] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:51.639 [2024-07-15 13:45:46.415458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:51.639 [2024-07-15 13:45:46.415520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:51.639 [2024-07-15 13:45:46.415523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:52.214 13:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:52.214 13:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:52.214 13:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:52.214 13:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.214 13:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.214 13:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:52.215 13:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:52.215 13:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:52.215 13:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:52.215 13:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:52.215 13:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:52.215 13:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:52.215 13:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:52.215 13:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:52.215 13:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.215 13:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.215 [2024-07-15 13:45:47.045847] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3641656 has claimed it. 
00:06:52.215 request: 00:06:52.215 { 00:06:52.215 "method": "framework_enable_cpumask_locks", 00:06:52.215 "req_id": 1 00:06:52.215 } 00:06:52.215 Got JSON-RPC error response 00:06:52.215 response: 00:06:52.215 { 00:06:52.215 "code": -32603, 00:06:52.215 "message": "Failed to claim CPU core: 2" 00:06:52.215 } 00:06:52.472 13:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:52.472 13:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:52.472 13:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:52.472 13:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:52.472 13:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:52.472 13:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3641656 /var/tmp/spdk.sock 00:06:52.472 13:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3641656 ']' 00:06:52.472 13:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.472 13:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:52.472 13:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.472 13:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:52.472 13:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.472 13:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:52.472 13:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:52.472 13:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3641669 /var/tmp/spdk2.sock 00:06:52.472 13:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3641669 ']' 00:06:52.472 13:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:52.472 13:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:52.472 13:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:52.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
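The error object above is the JSON-RPC response rpc_cmd captures when framework_enable_cpumask_locks is sent to the second target; -32603 is the generic internal-error code, with the real cause carried in the message. Assuming the standard SPDK scripts layout, a roughly equivalent manual call against the same socket would be:

  ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks

and it should keep failing with 'Failed to claim CPU core: 2' for as long as pid 3641656 (the first target) holds the core-2 lock file, /var/tmp/spdk_cpu_lock_002 per the check_remaining_locks list later in this log.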
00:06:52.472 13:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:52.472 13:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.729 13:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:52.730 13:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:52.730 13:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:52.730 13:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:52.730 13:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:52.730 13:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:52.730 00:06:52.730 real 0m1.963s 00:06:52.730 user 0m1.018s 00:06:52.730 sys 0m0.161s 00:06:52.730 13:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.730 13:45:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.730 ************************************ 00:06:52.730 END TEST locking_overlapped_coremask_via_rpc 00:06:52.730 ************************************ 00:06:52.730 13:45:47 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:52.730 13:45:47 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:52.730 13:45:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3641656 ]] 00:06:52.730 13:45:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3641656 00:06:52.730 13:45:47 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3641656 ']' 00:06:52.730 13:45:47 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3641656 00:06:52.730 13:45:47 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:52.730 13:45:47 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:52.730 13:45:47 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3641656 00:06:52.987 13:45:47 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:52.987 13:45:47 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:52.987 13:45:47 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3641656' 00:06:52.987 killing process with pid 3641656 00:06:52.987 13:45:47 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 3641656 00:06:52.987 13:45:47 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 3641656 00:06:53.246 13:45:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3641669 ]] 00:06:53.246 13:45:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3641669 00:06:53.246 13:45:48 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3641669 ']' 00:06:53.246 13:45:48 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3641669 00:06:53.246 13:45:48 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:06:53.246 13:45:48 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:53.246 13:45:48 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3641669 00:06:53.246 13:45:48 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:53.246 13:45:48 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:53.246 13:45:48 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3641669' 00:06:53.246 killing process with pid 3641669 00:06:53.246 13:45:48 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 3641669 00:06:53.246 13:45:48 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 3641669 00:06:53.814 13:45:48 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:53.814 13:45:48 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:53.814 13:45:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3641656 ]] 00:06:53.814 13:45:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3641656 00:06:53.814 13:45:48 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3641656 ']' 00:06:53.814 13:45:48 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3641656 00:06:53.815 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3641656) - No such process 00:06:53.815 13:45:48 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 3641656 is not found' 00:06:53.815 Process with pid 3641656 is not found 00:06:53.815 13:45:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3641669 ]] 00:06:53.815 13:45:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3641669 00:06:53.815 13:45:48 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3641669 ']' 00:06:53.815 13:45:48 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3641669 00:06:53.815 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3641669) - No such process 00:06:53.815 13:45:48 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 3641669 is not found' 00:06:53.815 Process with pid 3641669 is not found 00:06:53.815 13:45:48 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:53.815 00:06:53.815 real 0m16.064s 00:06:53.815 user 0m27.720s 00:06:53.815 sys 0m5.200s 00:06:53.815 13:45:48 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.815 13:45:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:53.815 ************************************ 00:06:53.815 END TEST cpu_locks 00:06:53.815 ************************************ 00:06:53.815 13:45:48 event -- common/autotest_common.sh@1142 -- # return 0 00:06:53.815 00:06:53.815 real 0m40.014s 00:06:53.815 user 1m15.645s 00:06:53.815 sys 0m9.207s 00:06:53.815 13:45:48 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.815 13:45:48 event -- common/autotest_common.sh@10 -- # set +x 00:06:53.815 ************************************ 00:06:53.815 END TEST event 00:06:53.815 ************************************ 00:06:53.815 13:45:48 -- common/autotest_common.sh@1142 -- # return 0 00:06:53.815 13:45:48 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:53.815 13:45:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:53.815 13:45:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.815 
13:45:48 -- common/autotest_common.sh@10 -- # set +x 00:06:53.815 ************************************ 00:06:53.815 START TEST thread 00:06:53.815 ************************************ 00:06:53.815 13:45:48 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:53.815 * Looking for test storage... 00:06:53.815 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:53.815 13:45:48 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:53.815 13:45:48 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:53.815 13:45:48 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.815 13:45:48 thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.074 ************************************ 00:06:54.074 START TEST thread_poller_perf 00:06:54.074 ************************************ 00:06:54.074 13:45:48 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:54.074 [2024-07-15 13:45:48.666988] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:06:54.074 [2024-07-15 13:45:48.667071] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3642050 ] 00:06:54.074 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.074 [2024-07-15 13:45:48.727449] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.074 [2024-07-15 13:45:48.833662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.074 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:55.453 ====================================== 00:06:55.453 busy:2713454034 (cyc) 00:06:55.453 total_run_count: 367000 00:06:55.453 tsc_hz: 2700000000 (cyc) 00:06:55.453 ====================================== 00:06:55.453 poller_cost: 7393 (cyc), 2738 (nsec) 00:06:55.453 00:06:55.453 real 0m1.299s 00:06:55.453 user 0m1.213s 00:06:55.453 sys 0m0.078s 00:06:55.453 13:45:49 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.453 13:45:49 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:55.453 ************************************ 00:06:55.453 END TEST thread_poller_perf 00:06:55.453 ************************************ 00:06:55.453 13:45:49 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:55.453 13:45:49 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:55.453 13:45:49 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:55.453 13:45:49 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.453 13:45:49 thread -- common/autotest_common.sh@10 -- # set +x 00:06:55.453 ************************************ 00:06:55.453 START TEST thread_poller_perf 00:06:55.453 ************************************ 00:06:55.453 13:45:50 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:55.453 [2024-07-15 13:45:50.016349] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:06:55.453 [2024-07-15 13:45:50.016437] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3642308 ] 00:06:55.453 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.453 [2024-07-15 13:45:50.078976] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.453 [2024-07-15 13:45:50.184201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.453 Running 1000 pollers for 1 seconds with 0 microseconds period. 
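Each of these result banners is plain arithmetic over the counters it prints: poller_cost in cycles is busy divided by total_run_count, and the nanosecond figure is that divided by tsc_hz. For the 1-microsecond-period run above, a quick shell check reproduces the reported 7393 (cyc) / 2738 (nsec):

  echo $(( 2713454034 / 367000 ))              # 7393 cycles per poller run
  echo $(( 7393 * 1000000000 / 2700000000 ))   # 2738 ns at tsc_hz = 2.7 GHz

The 0-microsecond-period run launched just above reports its own banner next, computed the same way (557 cyc, 206 nsec).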
00:06:56.830 ====================================== 00:06:56.830 busy:2702170323 (cyc) 00:06:56.830 total_run_count: 4845000 00:06:56.830 tsc_hz: 2700000000 (cyc) 00:06:56.830 ====================================== 00:06:56.830 poller_cost: 557 (cyc), 206 (nsec) 00:06:56.830 00:06:56.830 real 0m1.292s 00:06:56.830 user 0m1.201s 00:06:56.830 sys 0m0.085s 00:06:56.830 13:45:51 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:56.830 13:45:51 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:56.830 ************************************ 00:06:56.830 END TEST thread_poller_perf 00:06:56.830 ************************************ 00:06:56.830 13:45:51 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:56.830 13:45:51 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:56.830 00:06:56.830 real 0m2.741s 00:06:56.830 user 0m2.477s 00:06:56.830 sys 0m0.261s 00:06:56.830 13:45:51 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:56.830 13:45:51 thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.830 ************************************ 00:06:56.830 END TEST thread 00:06:56.830 ************************************ 00:06:56.830 13:45:51 -- common/autotest_common.sh@1142 -- # return 0 00:06:56.830 13:45:51 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:56.830 13:45:51 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:56.830 13:45:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.830 13:45:51 -- common/autotest_common.sh@10 -- # set +x 00:06:56.830 ************************************ 00:06:56.830 START TEST accel 00:06:56.830 ************************************ 00:06:56.830 13:45:51 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:56.830 * Looking for test storage... 00:06:56.830 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:56.830 13:45:51 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:56.830 13:45:51 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:56.830 13:45:51 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:56.830 13:45:51 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=3642504 00:06:56.830 13:45:51 accel -- accel/accel.sh@63 -- # waitforlisten 3642504 00:06:56.830 13:45:51 accel -- common/autotest_common.sh@829 -- # '[' -z 3642504 ']' 00:06:56.830 13:45:51 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.830 13:45:51 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:56.830 13:45:51 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:56.830 13:45:51 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:56.830 13:45:51 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.830 13:45:51 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:56.830 13:45:51 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:56.830 13:45:51 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.830 13:45:51 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.830 13:45:51 accel -- common/autotest_common.sh@10 -- # set +x 00:06:56.830 13:45:51 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.830 13:45:51 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:56.830 13:45:51 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:56.830 13:45:51 accel -- accel/accel.sh@41 -- # jq -r . 00:06:56.830 [2024-07-15 13:45:51.461498] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:06:56.830 [2024-07-15 13:45:51.461578] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3642504 ] 00:06:56.830 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.830 [2024-07-15 13:45:51.520080] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.830 [2024-07-15 13:45:51.629853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.088 13:45:51 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:57.088 13:45:51 accel -- common/autotest_common.sh@862 -- # return 0 00:06:57.088 13:45:51 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:57.088 13:45:51 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:57.088 13:45:51 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:57.088 13:45:51 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:57.088 13:45:51 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:57.088 13:45:51 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:57.088 13:45:51 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.088 13:45:51 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:57.088 13:45:51 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.088 13:45:51 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:57.088 13:45:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.088 13:45:51 accel -- accel/accel.sh@72 -- # IFS== 00:06:57.088 13:45:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:57.088 13:45:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:57.088 13:45:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.088 13:45:51 accel -- accel/accel.sh@72 -- # IFS== 00:06:57.088 13:45:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:57.088 13:45:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:57.088 13:45:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.088 13:45:51 accel -- accel/accel.sh@72 -- # IFS== 00:06:57.088 13:45:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:57.088 13:45:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:57.088 13:45:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.088 13:45:51 accel -- accel/accel.sh@72 -- # IFS== 00:06:57.088 13:45:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:57.088 13:45:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:57.088 13:45:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.088 13:45:51 accel -- accel/accel.sh@72 -- # IFS== 00:06:57.088 13:45:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:57.088 13:45:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:57.088 13:45:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.088 13:45:51 accel -- accel/accel.sh@72 -- # IFS== 00:06:57.088 13:45:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:57.088 13:45:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:57.088 13:45:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.088 13:45:51 accel -- accel/accel.sh@72 -- # IFS== 00:06:57.088 13:45:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:57.089 13:45:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:57.089 13:45:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.089 13:45:51 accel -- accel/accel.sh@72 -- # IFS== 00:06:57.089 13:45:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:57.089 13:45:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:57.089 13:45:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.089 13:45:51 accel -- accel/accel.sh@72 -- # IFS== 00:06:57.089 13:45:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:57.089 13:45:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:57.089 13:45:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.089 13:45:51 accel -- accel/accel.sh@72 -- # IFS== 00:06:57.089 13:45:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:57.089 13:45:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:57.089 13:45:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.089 13:45:51 accel -- accel/accel.sh@72 -- # IFS== 00:06:57.089 13:45:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:57.089 13:45:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:57.089 
13:45:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.089 13:45:51 accel -- accel/accel.sh@72 -- # IFS== 00:06:57.089 13:45:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:57.089 13:45:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:57.089 13:45:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.089 13:45:51 accel -- accel/accel.sh@72 -- # IFS== 00:06:57.089 13:45:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:57.089 13:45:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:57.089 13:45:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.089 13:45:51 accel -- accel/accel.sh@72 -- # IFS== 00:06:57.089 13:45:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:57.089 13:45:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:57.089 13:45:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.089 13:45:51 accel -- accel/accel.sh@72 -- # IFS== 00:06:57.089 13:45:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:57.089 13:45:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:57.089 13:45:51 accel -- accel/accel.sh@75 -- # killprocess 3642504 00:06:57.089 13:45:51 accel -- common/autotest_common.sh@948 -- # '[' -z 3642504 ']' 00:06:57.089 13:45:51 accel -- common/autotest_common.sh@952 -- # kill -0 3642504 00:06:57.089 13:45:51 accel -- common/autotest_common.sh@953 -- # uname 00:06:57.089 13:45:51 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:57.089 13:45:51 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3642504 00:06:57.362 13:45:51 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:57.362 13:45:51 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:57.362 13:45:51 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3642504' 00:06:57.362 killing process with pid 3642504 00:06:57.362 13:45:51 accel -- common/autotest_common.sh@967 -- # kill 3642504 00:06:57.362 13:45:51 accel -- common/autotest_common.sh@972 -- # wait 3642504 00:06:57.622 13:45:52 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:57.622 13:45:52 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:57.622 13:45:52 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:57.622 13:45:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.622 13:45:52 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.622 13:45:52 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:57.622 13:45:52 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:57.622 13:45:52 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:57.622 13:45:52 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:57.622 13:45:52 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:57.622 13:45:52 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.622 13:45:52 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.622 13:45:52 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:57.622 13:45:52 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:57.622 13:45:52 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
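The repetitive IFS==/read loop above is get_expected_opcs parsing the accel_get_opc_assignments RPC output one opcode at a time and recording that, with no hardware module configured, every opcode is expected to be served by the software module; the individual accel tests below compare their reported module against this map. A sketch of the parsed form, assuming the same jq filter the script pipes the RPC through:

  ./scripts/rpc.py accel_get_opc_assignments | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
  # expected output is key=value pairs along the lines of:
  #   copy=software
  #   fill=software
  #   crc32c=software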
00:06:57.622 13:45:52 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.622 13:45:52 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:57.622 13:45:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:57.622 13:45:52 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:57.622 13:45:52 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:57.622 13:45:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.622 13:45:52 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.881 ************************************ 00:06:57.881 START TEST accel_missing_filename 00:06:57.881 ************************************ 00:06:57.881 13:45:52 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:57.881 13:45:52 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:57.881 13:45:52 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:57.881 13:45:52 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:57.881 13:45:52 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:57.881 13:45:52 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:57.881 13:45:52 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:57.881 13:45:52 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:57.881 13:45:52 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:57.881 13:45:52 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:57.881 13:45:52 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:57.881 13:45:52 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:57.881 13:45:52 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.881 13:45:52 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.881 13:45:52 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:57.881 13:45:52 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:57.881 13:45:52 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:57.881 [2024-07-15 13:45:52.483997] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:06:57.881 [2024-07-15 13:45:52.484073] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3642671 ] 00:06:57.881 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.881 [2024-07-15 13:45:52.541812] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.882 [2024-07-15 13:45:52.646818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.882 [2024-07-15 13:45:52.704989] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:58.141 [2024-07-15 13:45:52.788880] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:58.141 A filename is required. 
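The 'A filename is required.' abort above is the intended negative result: the compress workload refuses to start without an input file, which accel_missing_filename provokes by omitting -l. The accel_compress_verify test that follows supplies the file but adds -y, hitting the complementary 'Compression does not support the verify option' error. For contrast, a sketch of the form the tool would accept, using the same input file the suite ships (paths relative to the SPDK checkout, and without the -c /dev/fd/62 config the harness injects):

  ./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib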
00:06:58.141 13:45:52 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:58.141 13:45:52 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:58.141 13:45:52 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:58.141 13:45:52 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:58.141 13:45:52 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:58.141 13:45:52 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:58.141 00:06:58.141 real 0m0.435s 00:06:58.141 user 0m0.336s 00:06:58.141 sys 0m0.133s 00:06:58.141 13:45:52 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.141 13:45:52 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:58.141 ************************************ 00:06:58.141 END TEST accel_missing_filename 00:06:58.141 ************************************ 00:06:58.141 13:45:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:58.141 13:45:52 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:58.141 13:45:52 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:58.141 13:45:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.141 13:45:52 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.141 ************************************ 00:06:58.141 START TEST accel_compress_verify 00:06:58.141 ************************************ 00:06:58.141 13:45:52 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:58.141 13:45:52 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:58.141 13:45:52 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:58.141 13:45:52 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:58.141 13:45:52 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:58.141 13:45:52 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:58.141 13:45:52 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:58.141 13:45:52 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:58.141 13:45:52 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:58.141 13:45:52 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:58.141 13:45:52 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.141 13:45:52 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.141 13:45:52 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.141 13:45:52 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.141 13:45:52 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.141 13:45:52 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:58.141 13:45:52 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:58.141 [2024-07-15 13:45:52.968756] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:06:58.141 [2024-07-15 13:45:52.968820] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3642706 ] 00:06:58.398 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.398 [2024-07-15 13:45:53.024904] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.398 [2024-07-15 13:45:53.129216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.398 [2024-07-15 13:45:53.188343] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:58.655 [2024-07-15 13:45:53.266895] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:58.655 00:06:58.655 Compression does not support the verify option, aborting. 00:06:58.656 13:45:53 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:58.656 13:45:53 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:58.656 13:45:53 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:58.656 13:45:53 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:58.656 13:45:53 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:58.656 13:45:53 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:58.656 00:06:58.656 real 0m0.431s 00:06:58.656 user 0m0.335s 00:06:58.656 sys 0m0.130s 00:06:58.656 13:45:53 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.656 13:45:53 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:58.656 ************************************ 00:06:58.656 END TEST accel_compress_verify 00:06:58.656 ************************************ 00:06:58.656 13:45:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:58.656 13:45:53 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:58.656 13:45:53 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:58.656 13:45:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.656 13:45:53 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.656 ************************************ 00:06:58.656 START TEST accel_wrong_workload 00:06:58.656 ************************************ 00:06:58.656 13:45:53 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:58.656 13:45:53 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:58.656 13:45:53 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:58.656 13:45:53 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:58.656 13:45:53 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:58.656 13:45:53 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:58.656 13:45:53 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:58.656 13:45:53 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:58.656 13:45:53 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:58.656 13:45:53 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:58.656 13:45:53 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.656 13:45:53 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.656 13:45:53 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.656 13:45:53 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.656 13:45:53 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.656 13:45:53 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:58.656 13:45:53 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:58.656 Unsupported workload type: foobar 00:06:58.656 [2024-07-15 13:45:53.447916] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:58.656 accel_perf options: 00:06:58.656 [-h help message] 00:06:58.656 [-q queue depth per core] 00:06:58.656 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:58.656 [-T number of threads per core 00:06:58.656 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:58.656 [-t time in seconds] 00:06:58.656 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:58.656 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:58.656 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:58.656 [-l for compress/decompress workloads, name of uncompressed input file 00:06:58.656 [-S for crc32c workload, use this seed value (default 0) 00:06:58.656 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:58.656 [-f for fill workload, use this BYTE value (default 255) 00:06:58.656 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:58.656 [-y verify result if this switch is on] 00:06:58.656 [-a tasks to allocate per core (default: same value as -q)] 00:06:58.656 Can be used to spread operations across a wider range of memory. 
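The foobar run fails in argument parsing before any work is submitted, and the usage text it dumps doubles as the reference list of valid -w values. A minimal well-formed invocation, mirroring the crc32c test that appears later in this section (-S sets the crc32c seed, -y verifies each result, both described in the help text above):

  ./build/examples/accel_perf -t 1 -w crc32c -S 32 -y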
00:06:58.656 13:45:53 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:58.656 13:45:53 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:58.656 13:45:53 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:58.656 13:45:53 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:58.656 00:06:58.656 real 0m0.024s 00:06:58.656 user 0m0.013s 00:06:58.656 sys 0m0.011s 00:06:58.656 13:45:53 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.656 13:45:53 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:58.656 ************************************ 00:06:58.656 END TEST accel_wrong_workload 00:06:58.656 ************************************ 00:06:58.656 Error: writing output failed: Broken pipe 00:06:58.656 13:45:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:58.656 13:45:53 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:58.656 13:45:53 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:58.656 13:45:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.656 13:45:53 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.915 ************************************ 00:06:58.915 START TEST accel_negative_buffers 00:06:58.915 ************************************ 00:06:58.915 13:45:53 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:58.915 13:45:53 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:58.915 13:45:53 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:58.915 13:45:53 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:58.915 13:45:53 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:58.915 13:45:53 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:58.915 13:45:53 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:58.915 13:45:53 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:58.915 13:45:53 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:58.915 13:45:53 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:58.915 13:45:53 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.915 13:45:53 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.915 13:45:53 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.915 13:45:53 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.915 13:45:53 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.915 13:45:53 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:58.915 13:45:53 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:58.915 -x option must be non-negative. 
00:06:58.915 [2024-07-15 13:45:53.520873] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:58.915 accel_perf options: 00:06:58.915 [-h help message] 00:06:58.915 [-q queue depth per core] 00:06:58.915 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:58.915 [-T number of threads per core 00:06:58.915 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:58.915 [-t time in seconds] 00:06:58.915 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:58.915 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:58.915 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:58.915 [-l for compress/decompress workloads, name of uncompressed input file 00:06:58.915 [-S for crc32c workload, use this seed value (default 0) 00:06:58.915 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:58.915 [-f for fill workload, use this BYTE value (default 255) 00:06:58.915 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:58.915 [-y verify result if this switch is on] 00:06:58.915 [-a tasks to allocate per core (default: same value as -q)] 00:06:58.915 Can be used to spread operations across a wider range of memory. 00:06:58.915 13:45:53 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:58.915 13:45:53 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:58.915 13:45:53 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:58.915 13:45:53 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:58.915 00:06:58.915 real 0m0.024s 00:06:58.915 user 0m0.013s 00:06:58.915 sys 0m0.011s 00:06:58.915 13:45:53 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.915 13:45:53 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:58.915 ************************************ 00:06:58.915 END TEST accel_negative_buffers 00:06:58.915 ************************************ 00:06:58.915 Error: writing output failed: Broken pipe 00:06:58.915 13:45:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:58.915 13:45:53 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:58.915 13:45:53 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:58.915 13:45:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.915 13:45:53 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.915 ************************************ 00:06:58.915 START TEST accel_crc32c 00:06:58.915 ************************************ 00:06:58.915 13:45:53 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:58.915 13:45:53 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:58.915 13:45:53 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:58.915 13:45:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.915 13:45:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.915 13:45:53 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:58.915 13:45:53 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:58.915 13:45:53 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:58.915 13:45:53 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.915 13:45:53 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.915 13:45:53 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.915 13:45:53 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.915 13:45:53 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.915 13:45:53 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:58.915 13:45:53 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:58.915 [2024-07-15 13:45:53.590609] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:06:58.915 [2024-07-15 13:45:53.590670] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3642882 ] 00:06:58.915 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.915 [2024-07-15 13:45:53.648699] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.915 [2024-07-15 13:45:53.754398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.174 13:45:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.174 13:45:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.174 13:45:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.174 13:45:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.174 13:45:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.174 13:45:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.174 13:45:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.174 13:45:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.174 13:45:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:59.174 13:45:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.174 13:45:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.174 13:45:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.174 13:45:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.174 13:45:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.174 13:45:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.174 13:45:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.174 13:45:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.174 13:45:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.174 13:45:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.174 13:45:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.174 13:45:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:59.174 13:45:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.174 13:45:53 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:59.174 13:45:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.174 13:45:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.174 13:45:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:59.174 13:45:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:06:59.174 13:45:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.174 13:45:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.174 13:45:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:59.174 13:45:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.174 13:45:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.174 13:45:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.174 13:45:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.174 13:45:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.174 13:45:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.174 13:45:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.174 13:45:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:59.174 13:45:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.174 13:45:53 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:59.174 13:45:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.174 13:45:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.174 13:45:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:59.174 13:45:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.174 13:45:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.174 13:45:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.174 13:45:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:59.175 13:45:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.175 13:45:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.175 13:45:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.175 13:45:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:59.175 13:45:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.175 13:45:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.175 13:45:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.175 13:45:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:59.175 13:45:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.175 13:45:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.175 13:45:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.175 13:45:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:59.175 13:45:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.175 13:45:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.175 13:45:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.175 13:45:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.175 13:45:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.175 13:45:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.175 13:45:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.175 13:45:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.175 13:45:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.175 13:45:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.175 13:45:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.555 13:45:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.555 13:45:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:07:00.555 13:45:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.555 13:45:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.555 13:45:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.555 13:45:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.555 13:45:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.555 13:45:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.555 13:45:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.555 13:45:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.555 13:45:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.555 13:45:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.555 13:45:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.555 13:45:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.555 13:45:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.555 13:45:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.555 13:45:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.555 13:45:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.555 13:45:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.555 13:45:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.555 13:45:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.555 13:45:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.555 13:45:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.555 13:45:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.555 13:45:55 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:00.555 13:45:55 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:00.555 13:45:55 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.555 00:07:00.555 real 0m1.436s 00:07:00.555 user 0m1.308s 00:07:00.555 sys 0m0.131s 00:07:00.555 13:45:55 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.555 13:45:55 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:00.555 ************************************ 00:07:00.555 END TEST accel_crc32c 00:07:00.555 ************************************ 00:07:00.555 13:45:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:00.555 13:45:55 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:00.555 13:45:55 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:00.555 13:45:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.555 13:45:55 accel -- common/autotest_common.sh@10 -- # set +x 00:07:00.555 ************************************ 00:07:00.555 START TEST accel_crc32c_C2 00:07:00.555 ************************************ 00:07:00.555 13:45:55 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:00.555 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:00.555 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:00.555 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.555 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.555 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:00.555 13:45:55 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:00.555 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.555 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:00.555 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:00.555 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.555 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.555 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:00.555 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:00.555 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:00.555 [2024-07-15 13:45:55.076289] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:07:00.555 [2024-07-15 13:45:55.076356] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3643046 ] 00:07:00.555 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.555 [2024-07-15 13:45:55.134183] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.555 [2024-07-15 13:45:55.238801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.555 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.555 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.555 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.555 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.555 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.555 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.555 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.556 13:45:55 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:07:00.556 13:45:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.938 13:45:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.938 13:45:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.938 13:45:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.938 13:45:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.938 13:45:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.938 13:45:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.938 13:45:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.938 13:45:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.938 13:45:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.938 13:45:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.938 13:45:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.938 13:45:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.938 13:45:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.938 13:45:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.938 13:45:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.938 13:45:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.938 13:45:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.938 13:45:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.938 13:45:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.938 13:45:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.938 13:45:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.938 13:45:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.938 13:45:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.938 13:45:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.938 13:45:56 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:01.938 13:45:56 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:01.938 13:45:56 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.938 00:07:01.938 real 0m1.437s 00:07:01.938 user 0m1.303s 00:07:01.938 sys 0m0.135s 00:07:01.938 13:45:56 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:01.938 13:45:56 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:01.938 ************************************ 00:07:01.938 END TEST accel_crc32c_C2 00:07:01.938 ************************************ 00:07:01.938 13:45:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:01.938 13:45:56 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:01.938 13:45:56 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:01.938 13:45:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.938 13:45:56 accel -- common/autotest_common.sh@10 -- # set +x 00:07:01.938 ************************************ 00:07:01.938 START TEST accel_copy 00:07:01.938 ************************************ 00:07:01.938 13:45:56 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:07:01.938 13:45:56 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:01.938 13:45:56 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
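The crc32c and crc32c -C 2 runs above, and every accel case that follows, go through the same driver: run_test hands a workload-specific argument list to accel_test, which launches the accel_perf example binary against a JSON config delivered on /dev/fd/62 and then parses the tool's "key: value" echo back into accel_opc and accel_module before asserting on them. A minimal bash sketch of that pattern, with the case-statement key names assumed for illustration (this is not the verbatim accel/accel.sh source, and the config plumbing is elided):

# Minimal sketch of the per-workload driver traced above; the key names matched in
# the case statement are assumptions, not taken from the actual accel/accel.sh.
accel_perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf

accel_test_sketch() {
    local accel_opc accel_module var val
    # accel_perf echoes its configuration as "key: value" lines; the trace's repeated
    # IFS=: / read -r var val / case "$var" entries correspond to this loop.
    while IFS=: read -r var val; do
        case "$var" in
            *"opcode"*) accel_opc=${val//[[:space:]]/} ;;    # e.g. crc32c, copy, fill (key name assumed)
            *"module"*) accel_module=${val//[[:space:]]/} ;; # e.g. software (key name assumed)
        esac
    done < <("$accel_perf" -c /dev/fd/62 "$@")  # fd 62 carries the JSON from build_accel_config (elided here)
    # Mirrors the assertions at the end of each run: an opcode and a module were
    # reported, and the software module handled the workload.
    [[ -n $accel_module && -n $accel_opc && $accel_module == software ]]
}

accel_test_sketch -t 1 -w copy -y   # the argument list run_test passes through for the copy case

The [[ software == \s\o\f\t\w\a\r\e ]] entries after each run are xtrace's escaped rendering of that final quoted string comparison, not corrupted log output.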
00:07:01.938 13:45:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.938 13:45:56 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:01.938 13:45:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.938 13:45:56 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:01.938 13:45:56 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:01.938 13:45:56 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:01.938 13:45:56 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:01.939 13:45:56 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.939 13:45:56 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.939 13:45:56 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:01.939 13:45:56 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:01.939 13:45:56 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:01.939 [2024-07-15 13:45:56.562302] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:07:01.939 [2024-07-15 13:45:56.562363] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3643201 ] 00:07:01.939 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.939 [2024-07-15 13:45:56.620363] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.939 [2024-07-15 13:45:56.727990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.197 13:45:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.574 13:45:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:03.574 13:45:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.574 13:45:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.574 13:45:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.574 
13:45:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:03.574 13:45:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.574 13:45:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.574 13:45:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.574 13:45:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:03.574 13:45:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.574 13:45:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.574 13:45:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.574 13:45:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:03.574 13:45:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.574 13:45:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.574 13:45:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.574 13:45:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:03.574 13:45:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.574 13:45:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.574 13:45:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.574 13:45:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:03.574 13:45:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.574 13:45:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.574 13:45:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.574 13:45:57 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:03.574 13:45:57 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:03.574 13:45:57 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.574 00:07:03.574 real 0m1.434s 00:07:03.574 user 0m1.295s 00:07:03.574 sys 0m0.140s 00:07:03.574 13:45:57 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:03.574 13:45:57 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:03.574 ************************************ 00:07:03.574 END TEST accel_copy 00:07:03.574 ************************************ 00:07:03.574 13:45:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:03.574 13:45:58 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:03.574 13:45:58 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:03.574 13:45:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.574 13:45:58 accel -- common/autotest_common.sh@10 -- # set +x 00:07:03.574 ************************************ 00:07:03.574 START TEST accel_fill 00:07:03.574 ************************************ 00:07:03.574 13:45:58 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:03.574 [2024-07-15 13:45:58.047839] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:07:03.574 [2024-07-15 13:45:58.047907] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3643473 ] 00:07:03.574 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.574 [2024-07-15 13:45:58.107657] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.574 [2024-07-15 13:45:58.214593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
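The fill case being set up above adds workload-specific flags on top of the common -c/-t/-y ones, and the config echo makes the mapping readable straight from the log: -f 128 comes back as the fill byte 0x80 just above, and the 64/64 pair echoed next corresponds to the -q 64 and -a 64 arguments (the trace does not label which value belongs to which flag). Collected as a standalone command, exactly as recorded in this log:

# accel_fill invocation from the trace; the flag-to-value mapping noted above is
# inferred from the config echo, not from accel_perf documentation.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
    -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y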
00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.574 13:45:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.949 13:45:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:04.949 13:45:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.949 13:45:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.950 13:45:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.950 13:45:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:04.950 13:45:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.950 13:45:59 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:07:04.950 13:45:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.950 13:45:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:04.950 13:45:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.950 13:45:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.950 13:45:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.950 13:45:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:04.950 13:45:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.950 13:45:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.950 13:45:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.950 13:45:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:04.950 13:45:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.950 13:45:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.950 13:45:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.950 13:45:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:04.950 13:45:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.950 13:45:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.950 13:45:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.950 13:45:59 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:04.950 13:45:59 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:04.950 13:45:59 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.950 00:07:04.950 real 0m1.437s 00:07:04.950 user 0m1.300s 00:07:04.950 sys 0m0.137s 00:07:04.950 13:45:59 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.950 13:45:59 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:04.950 ************************************ 00:07:04.950 END TEST accel_fill 00:07:04.950 ************************************ 00:07:04.950 13:45:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:04.950 13:45:59 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:04.950 13:45:59 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:04.950 13:45:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.950 13:45:59 accel -- common/autotest_common.sh@10 -- # set +x 00:07:04.950 ************************************ 00:07:04.950 START TEST accel_copy_crc32c 00:07:04.950 ************************************ 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:04.950 [2024-07-15 13:45:59.533097] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:07:04.950 [2024-07-15 13:45:59.533161] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3643631 ] 00:07:04.950 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.950 [2024-07-15 13:45:59.590653] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.950 [2024-07-15 13:45:59.694175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.950 
13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.950 13:45:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.327 13:46:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:06.327 13:46:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.327 13:46:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.327 13:46:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.327 13:46:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:06.327 13:46:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.327 13:46:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.327 13:46:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.327 13:46:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:06.327 13:46:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.327 13:46:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.327 13:46:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.327 13:46:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:06.327 13:46:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.327 13:46:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.327 13:46:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.327 13:46:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:06.327 13:46:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.327 13:46:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.327 13:46:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.327 13:46:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:06.327 13:46:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.327 13:46:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.327 13:46:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.327 13:46:00 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:06.327 13:46:00 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:06.327 13:46:00 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.327 00:07:06.327 real 0m1.420s 00:07:06.327 user 0m1.300s 00:07:06.327 sys 0m0.122s 00:07:06.327 13:46:00 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.327 13:46:00 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:06.327 ************************************ 00:07:06.327 END TEST accel_copy_crc32c 00:07:06.327 ************************************ 00:07:06.327 13:46:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:06.327 13:46:00 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:06.327 13:46:00 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:06.327 13:46:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.327 13:46:00 accel -- common/autotest_common.sh@10 -- # set +x 00:07:06.327 ************************************ 00:07:06.327 START TEST accel_copy_crc32c_C2 00:07:06.327 ************************************ 00:07:06.327 13:46:00 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:06.327 13:46:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:06.327 13:46:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:06.327 13:46:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.327 13:46:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:06.327 13:46:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.327 13:46:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:06.327 13:46:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.327 13:46:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:06.327 13:46:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:06.327 13:46:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.327 13:46:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.327 13:46:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:06.327 13:46:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:06.327 13:46:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:06.327 [2024-07-15 13:46:01.000757] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:07:06.327 [2024-07-15 13:46:01.000819] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3643795 ] 00:07:06.327 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.327 [2024-07-15 13:46:01.057473] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.327 [2024-07-15 13:46:01.163473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.586 13:46:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.971 13:46:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.971 13:46:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.971 13:46:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.971 13:46:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.971 13:46:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.971 13:46:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.971 13:46:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.971 13:46:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.971 13:46:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.971 13:46:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.971 13:46:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.971 13:46:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.971 13:46:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.971 13:46:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.971 13:46:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.971 13:46:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.971 13:46:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.971 13:46:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.971 13:46:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.971 13:46:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.971 13:46:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.971 13:46:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.971 13:46:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.971 13:46:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
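Both -C 2 variants in this stretch differ from their base tests only by the trailing -C 2 argument, and the copy_crc32c -C 2 config echo above reports an '8192 bytes' value alongside the usual '4096 bytes', i.e. twice the default transfer size. That is consistent with -C acting as a chained-buffer count, although the log itself never names the flag. The two invocations, verbatim from the trace:

# Base test and its -C 2 variant as recorded in this log; reading -C as a chain count
# is an inference from the 8192-byte value echoed above, not a fact documented here.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2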
00:07:07.971 13:46:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:07.971 13:46:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:07.971 13:46:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.971 00:07:07.971 real 0m1.432s 00:07:07.971 user 0m1.305s 00:07:07.971 sys 0m0.129s 00:07:07.971 13:46:02 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.971 13:46:02 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:07.971 ************************************ 00:07:07.971 END TEST accel_copy_crc32c_C2 00:07:07.971 ************************************ 00:07:07.971 13:46:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:07.971 13:46:02 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:07.971 13:46:02 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:07.971 13:46:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.971 13:46:02 accel -- common/autotest_common.sh@10 -- # set +x 00:07:07.971 ************************************ 00:07:07.971 START TEST accel_dualcast 00:07:07.971 ************************************ 00:07:07.971 13:46:02 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:07:07.971 13:46:02 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:07.971 13:46:02 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:07.971 13:46:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.971 13:46:02 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:07.971 13:46:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.971 13:46:02 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:07.971 13:46:02 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:07.971 13:46:02 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:07.971 13:46:02 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.971 13:46:02 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.971 13:46:02 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.971 13:46:02 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:07.971 13:46:02 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:07.971 13:46:02 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:07.971 [2024-07-15 13:46:02.482313] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
00:07:07.971 [2024-07-15 13:46:02.482375] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3644063 ] 00:07:07.971 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.971 [2024-07-15 13:46:02.539888] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.971 [2024-07-15 13:46:02.644300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.971 13:46:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:07.971 13:46:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.971 13:46:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.971 13:46:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.971 13:46:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:07.971 13:46:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.971 13:46:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.971 13:46:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.971 13:46:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:07.971 13:46:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.971 13:46:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.971 13:46:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.971 13:46:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:07.971 13:46:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.971 13:46:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.971 13:46:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.971 13:46:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:07.971 13:46:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.971 13:46:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.971 13:46:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.971 13:46:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:07.971 13:46:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.971 13:46:02 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:07.971 13:46:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.971 13:46:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.971 13:46:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:07.972 13:46:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.972 13:46:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.972 13:46:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.972 13:46:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:07.972 13:46:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.972 13:46:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.972 13:46:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.972 13:46:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:07.972 13:46:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.972 13:46:02 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:07.972 13:46:02 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:07:07.972 13:46:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.972 13:46:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:07.972 13:46:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.972 13:46:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.972 13:46:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.972 13:46:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:07.972 13:46:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.972 13:46:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.972 13:46:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.972 13:46:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:07.972 13:46:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.972 13:46:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.972 13:46:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.972 13:46:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:07.972 13:46:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.972 13:46:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.972 13:46:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.972 13:46:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:07.972 13:46:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.972 13:46:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.972 13:46:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.972 13:46:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:07.972 13:46:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.972 13:46:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.972 13:46:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.972 13:46:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:07.972 13:46:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.972 13:46:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.972 13:46:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:09.344 13:46:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:09.344 13:46:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:09.345 13:46:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:09.345 13:46:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:09.345 13:46:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:09.345 13:46:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:09.345 13:46:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:09.345 13:46:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:09.345 13:46:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:09.345 13:46:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:09.345 13:46:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:09.345 13:46:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:09.345 13:46:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:09.345 13:46:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:09.345 13:46:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:09.345 13:46:03 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:09.345 13:46:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:09.345 13:46:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:09.345 13:46:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:09.345 13:46:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:09.345 13:46:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:09.345 13:46:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:09.345 13:46:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:09.345 13:46:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:09.345 13:46:03 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:09.345 13:46:03 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:09.345 13:46:03 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.345 00:07:09.345 real 0m1.426s 00:07:09.345 user 0m1.300s 00:07:09.345 sys 0m0.127s 00:07:09.345 13:46:03 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.345 13:46:03 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:09.345 ************************************ 00:07:09.345 END TEST accel_dualcast 00:07:09.345 ************************************ 00:07:09.345 13:46:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:09.345 13:46:03 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:09.345 13:46:03 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:09.345 13:46:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.345 13:46:03 accel -- common/autotest_common.sh@10 -- # set +x 00:07:09.345 ************************************ 00:07:09.345 START TEST accel_compare 00:07:09.345 ************************************ 00:07:09.345 13:46:03 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:07:09.345 13:46:03 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:09.345 13:46:03 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:09.345 13:46:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:09.345 13:46:03 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:09.345 13:46:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:09.345 13:46:03 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:09.345 13:46:03 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:09.345 13:46:03 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:09.345 13:46:03 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:09.345 13:46:03 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.345 13:46:03 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.345 13:46:03 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:09.345 13:46:03 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:09.345 13:46:03 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:09.345 [2024-07-15 13:46:03.958118] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
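The accel_dualcast case above finished in roughly 1.43 s of wall time on the software engine. For reference, a minimal way to re-run just this workload by hand, assuming the SPDK tree is already built at the workspace path shown in the log (the harness feeds its JSON accel config through -c /dev/fd/62, which a quick manual run can likely omit), would look like:

    # re-run the dualcast workload on the software engine for 1 second and verify the output buffers
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./build/examples/accel_perf -t 1 -w dualcast -y

The same pattern applies to the compare and xor runs that follow; only the -w argument changes.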
00:07:09.345 [2024-07-15 13:46:03.958179] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3644224 ] 00:07:09.345 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.345 [2024-07-15 13:46:04.015541] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.345 [2024-07-15 13:46:04.119613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:09.345 13:46:04 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:09.345 13:46:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.757 13:46:05 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:10.757 13:46:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.757 13:46:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.757 13:46:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.757 13:46:05 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:10.757 13:46:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.757 13:46:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.757 13:46:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.757 13:46:05 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:10.757 13:46:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.757 13:46:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.757 13:46:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.757 13:46:05 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:10.757 13:46:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.757 13:46:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.757 13:46:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.757 
13:46:05 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:10.757 13:46:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.757 13:46:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.757 13:46:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.757 13:46:05 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:10.757 13:46:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.757 13:46:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.757 13:46:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.757 13:46:05 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:10.757 13:46:05 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:10.757 13:46:05 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:10.757 00:07:10.757 real 0m1.430s 00:07:10.757 user 0m1.301s 00:07:10.757 sys 0m0.131s 00:07:10.757 13:46:05 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:10.757 13:46:05 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:10.757 ************************************ 00:07:10.757 END TEST accel_compare 00:07:10.757 ************************************ 00:07:10.757 13:46:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:10.757 13:46:05 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:10.757 13:46:05 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:10.757 13:46:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.757 13:46:05 accel -- common/autotest_common.sh@10 -- # set +x 00:07:10.757 ************************************ 00:07:10.757 START TEST accel_xor 00:07:10.757 ************************************ 00:07:10.757 13:46:05 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:07:10.757 13:46:05 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:10.757 13:46:05 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:10.757 13:46:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.757 13:46:05 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:10.757 13:46:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.757 13:46:05 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:10.757 13:46:05 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:10.757 13:46:05 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:10.757 13:46:05 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:10.757 13:46:05 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.757 13:46:05 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.757 13:46:05 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:10.757 13:46:05 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:10.757 13:46:05 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:10.757 [2024-07-15 13:46:05.436840] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
00:07:10.757 [2024-07-15 13:46:05.436903] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3644385 ] 00:07:10.757 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.757 [2024-07-15 13:46:05.495236] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.016 [2024-07-15 13:46:05.601369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.016 13:46:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.016 13:46:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.016 13:46:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.016 13:46:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.016 13:46:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.016 13:46:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.016 13:46:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.016 13:46:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.016 13:46:05 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:11.016 13:46:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.016 13:46:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.016 13:46:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.016 13:46:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.016 13:46:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.016 13:46:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.016 13:46:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.016 13:46:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.016 13:46:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.016 13:46:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.016 13:46:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.016 13:46:05 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:11.016 13:46:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.016 13:46:05 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:11.016 13:46:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.016 13:46:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.016 13:46:05 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:11.016 13:46:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.016 13:46:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.016 13:46:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.016 13:46:05 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:11.016 13:46:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.016 13:46:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.016 13:46:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.016 13:46:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.016 13:46:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.016 13:46:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.016 13:46:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.016 13:46:05 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:11.016 13:46:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.016 13:46:05 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:11.016 13:46:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.016 13:46:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.016 13:46:05 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:11.016 13:46:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.016 13:46:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.016 13:46:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.016 13:46:05 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:11.016 13:46:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.016 13:46:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.016 13:46:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.016 13:46:05 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:11.016 13:46:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.016 13:46:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.016 13:46:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.016 13:46:05 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:11.016 13:46:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.016 13:46:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.017 13:46:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.017 13:46:05 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:11.017 13:46:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.017 13:46:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.017 13:46:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.017 13:46:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.017 13:46:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.017 13:46:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.017 13:46:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.017 13:46:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.017 13:46:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.017 13:46:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.017 13:46:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.392 13:46:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.392 13:46:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.392 13:46:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.392 13:46:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.392 13:46:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.392 13:46:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.392 13:46:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.392 13:46:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.392 13:46:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.392 13:46:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.392 13:46:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.392 13:46:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.392 13:46:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.392 13:46:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.392 13:46:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.392 13:46:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.392 13:46:06 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:12.392 13:46:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.392 13:46:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.392 13:46:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.392 13:46:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.392 13:46:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.392 13:46:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.392 13:46:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.392 13:46:06 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:12.392 13:46:06 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:12.392 13:46:06 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.392 00:07:12.392 real 0m1.437s 00:07:12.392 user 0m1.309s 00:07:12.392 sys 0m0.130s 00:07:12.392 13:46:06 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:12.392 13:46:06 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:12.392 ************************************ 00:07:12.392 END TEST accel_xor 00:07:12.392 ************************************ 00:07:12.392 13:46:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:12.392 13:46:06 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:12.392 13:46:06 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:12.392 13:46:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.392 13:46:06 accel -- common/autotest_common.sh@10 -- # set +x 00:07:12.392 ************************************ 00:07:12.392 START TEST accel_xor 00:07:12.392 ************************************ 00:07:12.392 13:46:06 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:07:12.392 13:46:06 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:12.392 13:46:06 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:12.392 13:46:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.392 13:46:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.392 13:46:06 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:12.392 13:46:06 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:12.392 13:46:06 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:12.392 13:46:06 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:12.392 13:46:06 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:12.392 13:46:06 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.392 13:46:06 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.392 13:46:06 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:12.392 13:46:06 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:12.392 13:46:06 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:12.392 [2024-07-15 13:46:06.924527] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
00:07:12.392 [2024-07-15 13:46:06.924588] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3644587 ] 00:07:12.392 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.392 [2024-07-15 13:46:06.982560] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.392 [2024-07-15 13:46:07.087563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.392 13:46:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.392 13:46:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.392 13:46:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.392 13:46:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.392 13:46:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.392 13:46:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.392 13:46:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.392 13:46:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.392 13:46:07 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:12.393 13:46:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.393 13:46:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.393 13:46:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.393 13:46:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.393 13:46:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.393 13:46:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.393 13:46:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.393 13:46:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.393 13:46:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.393 13:46:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.393 13:46:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.393 13:46:07 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:12.393 13:46:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.393 13:46:07 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:12.393 13:46:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.393 13:46:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.393 13:46:07 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:12.393 13:46:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.393 13:46:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.393 13:46:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.393 13:46:07 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:12.393 13:46:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.393 13:46:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.393 13:46:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.393 13:46:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.393 13:46:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.393 13:46:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.393 13:46:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.393 13:46:07 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:12.393 13:46:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.393 13:46:07 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:12.393 13:46:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.393 13:46:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.393 13:46:07 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:12.393 13:46:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.393 13:46:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.393 13:46:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.393 13:46:07 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:12.393 13:46:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.393 13:46:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.393 13:46:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.393 13:46:07 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:12.393 13:46:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.393 13:46:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.393 13:46:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.393 13:46:07 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:12.393 13:46:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.393 13:46:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.393 13:46:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.393 13:46:07 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:12.393 13:46:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.393 13:46:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.393 13:46:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.393 13:46:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.393 13:46:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.393 13:46:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.393 13:46:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.393 13:46:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.393 13:46:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.393 13:46:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.393 13:46:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.780 13:46:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:13.780 13:46:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.780 13:46:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.780 13:46:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.780 13:46:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:13.780 13:46:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.780 13:46:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.780 13:46:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.780 13:46:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:13.780 13:46:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.780 13:46:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.780 13:46:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.780 13:46:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:13.780 13:46:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.780 13:46:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.780 13:46:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.780 13:46:08 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:13.781 13:46:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.781 13:46:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.781 13:46:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.781 13:46:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:13.781 13:46:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.781 13:46:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.781 13:46:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.781 13:46:08 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:13.781 13:46:08 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:13.781 13:46:08 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.781 00:07:13.781 real 0m1.438s 00:07:13.781 user 0m1.299s 00:07:13.781 sys 0m0.140s 00:07:13.781 13:46:08 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.781 13:46:08 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:13.781 ************************************ 00:07:13.781 END TEST accel_xor 00:07:13.781 ************************************ 00:07:13.781 13:46:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:13.781 13:46:08 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:13.781 13:46:08 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:13.781 13:46:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.781 13:46:08 accel -- common/autotest_common.sh@10 -- # set +x 00:07:13.781 ************************************ 00:07:13.781 START TEST accel_dif_verify 00:07:13.781 ************************************ 00:07:13.781 13:46:08 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:07:13.781 13:46:08 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:13.781 13:46:08 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:13.781 13:46:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:13.781 13:46:08 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:13.781 13:46:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:13.781 13:46:08 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:13.781 13:46:08 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:13.781 13:46:08 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:13.781 13:46:08 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:13.781 13:46:08 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.781 13:46:08 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.781 13:46:08 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:13.781 13:46:08 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:13.781 13:46:08 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:13.781 [2024-07-15 13:46:08.408360] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
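The second accel_xor case repeats the xor workload with -x 3, i.e. three source buffers instead of the default two (visible as val=2 versus val=3 in the traced option parsing). A hand-run equivalent of this variant, under the same assumptions as the earlier sketch:

    # xor across three source buffers on the software engine for 1 second, verifying the result
    ./build/examples/accel_perf -t 1 -w xor -y -x 3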
00:07:13.781 [2024-07-15 13:46:08.408422] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3644813 ] 00:07:13.781 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.781 [2024-07-15 13:46:08.467080] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.781 [2024-07-15 13:46:08.571205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.040 13:46:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:14.040 13:46:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.040 13:46:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.040 13:46:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.040 13:46:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:14.040 13:46:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.040 13:46:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.040 13:46:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.040 13:46:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:14.040 13:46:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.040 13:46:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.040 13:46:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.040 13:46:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.041 13:46:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:15.416 13:46:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:07:15.416 13:46:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:15.416 13:46:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:15.416 13:46:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:15.416 13:46:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:15.416 13:46:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:15.416 13:46:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:15.416 13:46:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:15.416 13:46:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:15.416 13:46:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:15.416 13:46:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:15.416 13:46:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:15.416 13:46:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:15.416 13:46:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:15.416 13:46:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:15.416 13:46:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:15.416 13:46:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:15.416 13:46:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:15.416 13:46:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:15.416 13:46:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:15.416 13:46:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:15.416 13:46:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:15.416 13:46:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:15.416 13:46:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:15.416 13:46:09 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:15.416 13:46:09 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:15.416 13:46:09 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:15.416 00:07:15.416 real 0m1.439s 00:07:15.416 user 0m1.311s 00:07:15.416 sys 0m0.131s 00:07:15.416 13:46:09 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:15.416 13:46:09 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:15.416 ************************************ 00:07:15.416 END TEST accel_dif_verify 00:07:15.416 ************************************ 00:07:15.416 13:46:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:15.416 13:46:09 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:15.416 13:46:09 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:15.416 13:46:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.416 13:46:09 accel -- common/autotest_common.sh@10 -- # set +x 00:07:15.416 ************************************ 00:07:15.416 START TEST accel_dif_generate 00:07:15.416 ************************************ 00:07:15.416 13:46:09 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:07:15.416 13:46:09 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:15.416 13:46:09 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:15.416 13:46:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.416 
13:46:09 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:15.416 13:46:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.416 13:46:09 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:15.416 13:46:09 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:15.416 13:46:09 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:15.416 13:46:09 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:15.416 13:46:09 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.416 13:46:09 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.416 13:46:09 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:15.416 13:46:09 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:15.416 13:46:09 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:15.416 [2024-07-15 13:46:09.893119] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:07:15.416 [2024-07-15 13:46:09.893180] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3644977 ] 00:07:15.416 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.416 [2024-07-15 13:46:09.949325] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.416 [2024-07-15 13:46:10.057860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.416 13:46:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.416 13:46:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.416 13:46:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.416 13:46:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.416 13:46:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:15.417 13:46:10 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.417 13:46:10 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.417 13:46:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:16.794 13:46:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:16.794 13:46:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:16.794 13:46:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:16.794 13:46:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:16.794 13:46:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:16.794 13:46:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:16.794 13:46:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:16.794 13:46:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:16.794 13:46:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:16.794 13:46:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:16.794 13:46:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:16.794 13:46:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:16.794 13:46:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:16.794 13:46:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:16.794 13:46:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:16.794 13:46:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:16.794 13:46:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:16.794 13:46:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:16.794 13:46:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:16.794 13:46:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:16.794 13:46:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:16.794 13:46:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:16.794 13:46:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:16.794 13:46:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:16.794 13:46:11 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:16.794 13:46:11 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:16.794 13:46:11 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.794 00:07:16.794 real 0m1.438s 00:07:16.794 user 0m1.305s 00:07:16.794 sys 0m0.136s 00:07:16.794 13:46:11 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:16.794 13:46:11 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:16.794 ************************************ 00:07:16.794 END TEST accel_dif_generate 00:07:16.794 ************************************ 00:07:16.794 13:46:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:16.794 13:46:11 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:16.794 13:46:11 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:16.794 13:46:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.794 13:46:11 accel -- common/autotest_common.sh@10 -- # set +x 00:07:16.794 ************************************ 00:07:16.794 START TEST accel_dif_generate_copy 00:07:16.794 ************************************ 00:07:16.794 13:46:11 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:07:16.794 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:16.794 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:16.794 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.794 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:16.794 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.794 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:16.794 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:16.794 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:16.794 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:16.794 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.794 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.794 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:16.794 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:16.794 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:16.794 [2024-07-15 13:46:11.381459] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
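The long runs of "IFS=:", "read -r var val" and "case \"$var\" in" records above are bash xtrace (set -x) output from accel.sh's option loop: for each accel_perf setting the harness reads one var/val pair (operation dif_generate, buffer values such as '4096 bytes', '512 bytes' and '8 bytes', module software, numeric values such as 32, a run time of '1 seconds') and dispatches on the variable name, and the real/user/sys triple is the shell's timing for the 1-second dif_generate run. The records that follow start the next case, accel_dif_generate_copy, by launching build/examples/accel_perf with -t 1 -w dif_generate_copy and the generated JSON config on /dev/fd/62. A rough sketch of the parsing pattern implied by the trace, with made-up input and only the two assignments actually visible in it (the real accel.sh in the SPDK tree does considerably more):

    # sketch of the var/val loop suggested by the xtrace records; keys and input are illustrative
    while IFS=: read -r var val; do
        case "$var" in
            opc) accel_opc=$val ;;           # trace shows accel_opc=dif_generate
            module) accel_module=$val ;;     # trace shows accel_module=software
        esac
    done <<< $'opc:dif_generate\nmodule:software'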
00:07:16.794 [2024-07-15 13:46:11.381519] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3645131 ] 00:07:16.794 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.794 [2024-07-15 13:46:11.439333] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.794 [2024-07-15 13:46:11.544765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.794 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:16.794 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.794 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.794 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.794 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:16.794 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.794 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.794 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.794 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:16.794 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.794 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.794 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.794 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:16.794 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.794 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.794 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.794 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:16.794 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.794 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.794 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.794 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:16.794 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.794 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:16.794 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.794 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.794 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:16.794 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.794 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.794 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.794 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:16.794 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.794 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.794 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:07:16.794 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:16.794 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.794 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.794 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.794 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:16.794 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.794 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:16.795 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.795 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.795 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:16.795 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.795 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.795 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.795 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:16.795 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.795 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.795 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.795 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:16.795 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.795 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.795 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.795 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:16.795 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.795 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.795 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.795 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:16.795 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.795 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.795 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.795 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:16.795 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.795 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.795 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:16.795 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:16.795 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:16.795 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:16.795 13:46:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:18.166 13:46:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:18.166 13:46:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:18.166 13:46:12 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:07:18.166 13:46:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:18.166 13:46:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:18.166 13:46:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:18.166 13:46:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:18.166 13:46:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:18.166 13:46:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:18.166 13:46:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:18.166 13:46:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:18.166 13:46:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:18.166 13:46:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:18.166 13:46:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:18.166 13:46:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:18.166 13:46:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:18.166 13:46:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:18.166 13:46:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:18.166 13:46:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:18.166 13:46:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:18.166 13:46:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:18.166 13:46:12 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:18.166 13:46:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:18.166 13:46:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:18.166 13:46:12 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:18.166 13:46:12 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:18.166 13:46:12 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:18.166 00:07:18.166 real 0m1.438s 00:07:18.166 user 0m1.305s 00:07:18.166 sys 0m0.134s 00:07:18.166 13:46:12 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:18.166 13:46:12 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:18.166 ************************************ 00:07:18.166 END TEST accel_dif_generate_copy 00:07:18.166 ************************************ 00:07:18.166 13:46:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:18.166 13:46:12 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:18.166 13:46:12 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:18.166 13:46:12 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:18.166 13:46:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.166 13:46:12 accel -- common/autotest_common.sh@10 -- # set +x 00:07:18.166 ************************************ 00:07:18.166 START TEST accel_comp 00:07:18.166 ************************************ 00:07:18.166 13:46:12 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:18.166 13:46:12 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:07:18.166 13:46:12 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:18.166 13:46:12 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.166 13:46:12 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:18.166 13:46:12 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.166 13:46:12 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:18.166 13:46:12 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:18.166 13:46:12 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:18.166 13:46:12 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:18.166 13:46:12 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.166 13:46:12 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.166 13:46:12 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:18.166 13:46:12 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:18.166 13:46:12 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:18.166 [2024-07-15 13:46:12.868010] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:07:18.166 [2024-07-15 13:46:12.868078] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3645403 ] 00:07:18.166 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.166 [2024-07-15 13:46:12.926447] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.425 [2024-07-15 13:46:13.032325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.425 13:46:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.425 13:46:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.425 13:46:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.425 13:46:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.425 13:46:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.425 13:46:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.425 13:46:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.425 13:46:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.425 13:46:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.425 13:46:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.425 13:46:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.425 13:46:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.425 13:46:13 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:18.425 13:46:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.425 13:46:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.425 13:46:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.425 13:46:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.425 13:46:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.425 13:46:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.425 13:46:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.425 13:46:13 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:07:18.425 13:46:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.425 13:46:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.425 13:46:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.425 13:46:13 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:18.425 13:46:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.425 13:46:13 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:18.425 13:46:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.425 13:46:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.425 13:46:13 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:18.425 13:46:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.425 13:46:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.425 13:46:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.425 13:46:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.425 13:46:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.425 13:46:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.425 13:46:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.425 13:46:13 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:18.425 13:46:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.425 13:46:13 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:18.425 13:46:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.425 13:46:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.425 13:46:13 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:18.425 13:46:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.425 13:46:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.425 13:46:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.425 13:46:13 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:18.425 13:46:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.425 13:46:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.425 13:46:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.425 13:46:13 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:18.425 13:46:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.425 13:46:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.425 13:46:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.425 13:46:13 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:18.425 13:46:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.425 13:46:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.425 13:46:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.425 13:46:13 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:18.426 13:46:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.426 13:46:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.426 13:46:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.426 13:46:13 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:18.426 13:46:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.426 13:46:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.426 13:46:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:07:18.426 13:46:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.426 13:46:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.426 13:46:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.426 13:46:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.426 13:46:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.426 13:46:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.426 13:46:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.426 13:46:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.804 13:46:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:19.804 13:46:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.804 13:46:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.804 13:46:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.804 13:46:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:19.804 13:46:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.804 13:46:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.804 13:46:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.804 13:46:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:19.804 13:46:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.804 13:46:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.804 13:46:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.804 13:46:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:19.804 13:46:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.804 13:46:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.804 13:46:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.804 13:46:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:19.804 13:46:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.804 13:46:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.804 13:46:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.804 13:46:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:19.804 13:46:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.804 13:46:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.804 13:46:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.804 13:46:14 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:19.804 13:46:14 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:19.804 13:46:14 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.804 00:07:19.804 real 0m1.433s 00:07:19.804 user 0m1.303s 00:07:19.804 sys 0m0.133s 00:07:19.804 13:46:14 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.804 13:46:14 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:19.805 ************************************ 00:07:19.805 END TEST accel_comp 00:07:19.805 ************************************ 00:07:19.805 13:46:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:19.805 13:46:14 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:19.805 13:46:14 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:19.805 13:46:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.805 13:46:14 accel -- 
common/autotest_common.sh@10 -- # set +x 00:07:19.805 ************************************ 00:07:19.805 START TEST accel_decomp 00:07:19.805 ************************************ 00:07:19.805 13:46:14 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:19.805 [2024-07-15 13:46:14.353217] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
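This accel_decomp case drives the same binary with the flags shown verbatim in the trace, accel_perf -c /dev/fd/62 -t 1 -w decompress -l .../spdk/test/accel/bib -y, i.e. a 1-second decompress workload over the bundled test/accel/bib input, with -y read here as a verification switch (that reading, and running without the harness-generated -c config, are assumptions rather than something the log states). A minimal sketch of reproducing the run by hand from the repository root used in this job:

    # sketch only: same workload flags as in the trace, without the generated JSON config
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y

Running without a config would fall back to the plain software path, which is consistent with the accel_module=software assignment seen in the surrounding trace.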
00:07:19.805 [2024-07-15 13:46:14.353282] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3645561 ] 00:07:19.805 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.805 [2024-07-15 13:46:14.411631] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.805 [2024-07-15 13:46:14.516447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.805 13:46:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:21.188 13:46:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:21.188 13:46:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:21.188 13:46:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:21.188 13:46:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:21.188 13:46:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:21.188 13:46:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:21.188 13:46:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:21.188 13:46:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:21.188 13:46:15 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:21.188 13:46:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:21.188 13:46:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:21.188 13:46:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:21.188 13:46:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:21.188 13:46:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:21.188 13:46:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:21.188 13:46:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:21.188 13:46:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:21.188 13:46:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:21.188 13:46:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:21.188 13:46:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:21.188 13:46:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:21.188 13:46:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:21.188 13:46:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:21.188 13:46:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:21.188 13:46:15 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:21.188 13:46:15 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:21.188 13:46:15 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:21.188 00:07:21.188 real 0m1.428s 00:07:21.188 user 0m1.296s 00:07:21.188 sys 0m0.134s 00:07:21.188 13:46:15 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.188 13:46:15 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:21.188 ************************************ 00:07:21.188 END TEST accel_decomp 00:07:21.188 ************************************ 00:07:21.188 13:46:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:21.188 13:46:15 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:21.188 13:46:15 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:21.188 13:46:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.188 13:46:15 accel -- common/autotest_common.sh@10 -- # set +x 00:07:21.188 ************************************ 00:07:21.188 START TEST accel_decomp_full 00:07:21.188 ************************************ 00:07:21.188 13:46:15 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:21.188 13:46:15 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:21.188 13:46:15 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:21.188 13:46:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.188 13:46:15 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:21.188 13:46:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.188 13:46:15 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:21.188 13:46:15 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:21.188 13:46:15 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:21.188 13:46:15 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:21.188 13:46:15 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.188 13:46:15 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.188 13:46:15 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:21.188 13:46:15 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:21.188 13:46:15 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:21.188 [2024-07-15 13:46:15.828847] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:07:21.188 [2024-07-15 13:46:15.828911] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3645717 ] 00:07:21.188 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.188 [2024-07-15 13:46:15.887599] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.188 [2024-07-15 13:46:15.995896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.447 13:46:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:21.447 13:46:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.447 13:46:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.447 13:46:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.447 13:46:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:21.447 13:46:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.447 13:46:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.447 13:46:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.447 13:46:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:21.447 13:46:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.447 13:46:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.447 13:46:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.447 13:46:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:21.447 13:46:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.447 13:46:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.447 13:46:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.447 13:46:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:21.447 13:46:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.447 13:46:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.447 13:46:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.447 13:46:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:21.447 13:46:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.447 13:46:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.447 13:46:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.447 13:46:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:21.447 13:46:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.447 13:46:16 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:21.447 13:46:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.447 13:46:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.447 13:46:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:21.447 13:46:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.447 13:46:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.447 13:46:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.447 13:46:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:21.447 13:46:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.447 13:46:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.447 13:46:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.447 13:46:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:21.447 13:46:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.447 13:46:16 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:21.447 13:46:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.447 13:46:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.447 13:46:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:21.448 13:46:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.448 13:46:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.448 13:46:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.448 13:46:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:21.448 13:46:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.448 13:46:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.448 13:46:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.448 13:46:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:21.448 13:46:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.448 13:46:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.448 13:46:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.448 13:46:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:21.448 13:46:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.448 13:46:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.448 13:46:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.448 13:46:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:21.448 13:46:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.448 13:46:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.448 13:46:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.448 13:46:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:21.448 13:46:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.448 13:46:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.448 13:46:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.448 13:46:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:21.448 13:46:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:07:21.448 13:46:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.448 13:46:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.448 13:46:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:21.448 13:46:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.448 13:46:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.448 13:46:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:22.825 13:46:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:22.825 13:46:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:22.825 13:46:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:22.825 13:46:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:22.825 13:46:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:22.825 13:46:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:22.825 13:46:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:22.825 13:46:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:22.825 13:46:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:22.825 13:46:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:22.825 13:46:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:22.825 13:46:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:22.825 13:46:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:22.825 13:46:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:22.825 13:46:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:22.825 13:46:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:22.825 13:46:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:22.825 13:46:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:22.825 13:46:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:22.825 13:46:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:22.825 13:46:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:22.825 13:46:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:22.825 13:46:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:22.825 13:46:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:22.825 13:46:17 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:22.825 13:46:17 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:22.825 13:46:17 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:22.825 00:07:22.825 real 0m1.446s 00:07:22.825 user 0m1.309s 00:07:22.825 sys 0m0.140s 00:07:22.825 13:46:17 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.825 13:46:17 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:22.825 ************************************ 00:07:22.825 END TEST accel_decomp_full 00:07:22.825 ************************************ 00:07:22.825 13:46:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:22.825 13:46:17 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:22.825 13:46:17 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:07:22.825 13:46:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.825 13:46:17 accel -- common/autotest_common.sh@10 -- # set +x 00:07:22.825 ************************************ 00:07:22.825 START TEST accel_decomp_mcore 00:07:22.825 ************************************ 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:22.825 [2024-07-15 13:46:17.326703] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
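The -m 0xf mask from accel_test is forwarded to the application as a core mask: the EAL parameters record that follows shows -c 0xf, spdk_app_start reports four available cores, and four reactors come up on cores 0 through 3, which is also why the user time reported for this case further down (roughly 4.7 s against a 1-second wall clock) is about four times the real time. A throwaway snippet, not part of the test flow, just to spell out what the mask covers:

    # expands a hex CPU mask into core numbers; 0xf -> 0 1 2 3
    mask=0xf; for i in {0..31}; do (( (mask >> i) & 1 )) && printf '%d ' "$i"; done; echo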
00:07:22.825 [2024-07-15 13:46:17.326778] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3645991 ] 00:07:22.825 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.825 [2024-07-15 13:46:17.386107] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:22.825 [2024-07-15 13:46:17.492483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.825 [2024-07-15 13:46:17.492588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:22.825 [2024-07-15 13:46:17.492678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:22.825 [2024-07-15 13:46:17.492680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.825 13:46:17 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:22.825 13:46:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.202 13:46:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:24.202 13:46:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.202 13:46:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.202 13:46:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.202 13:46:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:24.202 13:46:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.202 13:46:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.202 13:46:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.202 13:46:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:24.202 13:46:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.202 13:46:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.202 13:46:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.202 13:46:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:24.202 13:46:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.202 13:46:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.202 13:46:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.202 13:46:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:24.202 13:46:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.202 13:46:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.202 13:46:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.202 13:46:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:24.202 13:46:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.202 13:46:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.202 13:46:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.202 13:46:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:24.202 13:46:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.202 13:46:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.202 13:46:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.202 13:46:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:24.202 13:46:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.202 13:46:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.202 13:46:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.202 13:46:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:24.202 13:46:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.202 13:46:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.202 13:46:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.202 13:46:18 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:24.202 13:46:18 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:24.202 13:46:18 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:24.202 00:07:24.202 real 0m1.451s 00:07:24.202 user 0m4.749s 00:07:24.202 sys 0m0.146s 00:07:24.202 13:46:18 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:24.202 13:46:18 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:24.202 ************************************ 00:07:24.202 END TEST accel_decomp_mcore 00:07:24.202 ************************************ 00:07:24.202 13:46:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:24.202 13:46:18 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:24.202 13:46:18 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:24.202 13:46:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.202 13:46:18 accel -- common/autotest_common.sh@10 -- # set +x 00:07:24.202 ************************************ 00:07:24.202 START TEST accel_decomp_full_mcore 00:07:24.202 ************************************ 00:07:24.202 13:46:18 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:24.202 13:46:18 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:24.202 13:46:18 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:24.202 13:46:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.202 13:46:18 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:24.202 13:46:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.202 13:46:18 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:24.202 13:46:18 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:24.202 13:46:18 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:24.202 13:46:18 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:24.202 13:46:18 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.202 13:46:18 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.202 13:46:18 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:24.202 13:46:18 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:24.203 13:46:18 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:24.203 [2024-07-15 13:46:18.825141] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
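The accel_decomp_full_mcore case that starts here is driven by the same accel_test/accel_perf path recorded above: build/examples/accel_perf runs a one-second software decompress of test/accel/bib on core mask 0xf with result verification enabled. A minimal by-hand reproduction follows, with flag meanings read off the recorded command line; treat the parenthetical descriptions as assumptions and confirm with ./build/examples/accel_perf -h.

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # -t 1 (run time in seconds), -w decompress (workload), -l <input data file>,
  # -y (verify the result), -o 0 (I/O size as passed by the harness), -m 0xf (core mask, cores 0-3)
  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0 -m 0xf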
00:07:24.203 [2024-07-15 13:46:18.825201] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3646149 ] 00:07:24.203 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.203 [2024-07-15 13:46:18.883886] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:24.203 [2024-07-15 13:46:18.991724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.203 [2024-07-15 13:46:18.991785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:24.203 [2024-07-15 13:46:18.991853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:24.203 [2024-07-15 13:46:18.991856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.462 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:24.462 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.462 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.462 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.462 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:24.462 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.462 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.463 13:46:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.841 13:46:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:25.841 13:46:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.841 13:46:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.841 13:46:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.841 13:46:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:25.841 13:46:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.841 13:46:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.841 13:46:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.841 13:46:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:25.841 13:46:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.841 13:46:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.841 13:46:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.841 13:46:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:25.841 13:46:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.841 13:46:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.841 13:46:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.841 13:46:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:25.841 13:46:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.841 13:46:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.841 13:46:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.841 13:46:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:25.841 13:46:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.841 13:46:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.841 13:46:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.841 13:46:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:25.841 13:46:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.841 13:46:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.841 13:46:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.841 13:46:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:25.841 13:46:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.841 13:46:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.841 13:46:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.841 13:46:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:25.841 13:46:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.841 13:46:20 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:25.841 13:46:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.841 13:46:20 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:25.841 13:46:20 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:25.841 13:46:20 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:25.841 00:07:25.841 real 0m1.479s 00:07:25.841 user 0m0.012s 00:07:25.841 sys 0m0.002s 00:07:25.841 13:46:20 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.841 13:46:20 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:25.841 ************************************ 00:07:25.841 END TEST accel_decomp_full_mcore 00:07:25.841 ************************************ 00:07:25.841 13:46:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:25.841 13:46:20 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:25.841 13:46:20 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:25.841 13:46:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.841 13:46:20 accel -- common/autotest_common.sh@10 -- # set +x 00:07:25.841 ************************************ 00:07:25.841 START TEST accel_decomp_mthread 00:07:25.841 ************************************ 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:25.841 [2024-07-15 13:46:20.356956] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
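accel_decomp_mthread, starting here, keeps the same decompress workload but drops the core mask and adds -T 2 on the recorded command line; the assumption is that -T sets the number of worker threads per core in accel_perf, so this run exercises two channels on a single core instead of four separate cores. Sketch of the equivalent direct invocation:

  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -T 2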
00:07:25.841 [2024-07-15 13:46:20.357023] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3646316 ] 00:07:25.841 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.841 [2024-07-15 13:46:20.417325] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.841 [2024-07-15 13:46:20.522075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.841 13:46:20 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.841 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.842 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.842 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.842 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.842 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.842 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.842 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.842 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.842 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.842 13:46:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.224 13:46:21 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:07:27.224 13:46:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.224 13:46:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.224 13:46:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.224 13:46:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:27.224 13:46:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.224 13:46:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.224 13:46:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.224 13:46:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:27.224 13:46:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.224 13:46:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.224 13:46:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.224 13:46:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:27.224 13:46:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.224 13:46:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.224 13:46:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.224 13:46:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:27.224 13:46:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.224 13:46:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.224 13:46:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.224 13:46:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:27.224 13:46:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.224 13:46:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.224 13:46:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.224 13:46:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:27.224 13:46:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.224 13:46:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.224 13:46:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.224 13:46:21 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:27.224 13:46:21 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:27.224 13:46:21 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:27.224 00:07:27.224 real 0m1.446s 00:07:27.224 user 0m1.311s 00:07:27.224 sys 0m0.137s 00:07:27.224 13:46:21 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.224 13:46:21 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:27.224 ************************************ 00:07:27.224 END TEST accel_decomp_mthread 00:07:27.224 ************************************ 00:07:27.224 13:46:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:27.224 13:46:21 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:27.224 13:46:21 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:27.224 13:46:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.224 13:46:21 accel -- 
common/autotest_common.sh@10 -- # set +x 00:07:27.224 ************************************ 00:07:27.224 START TEST accel_decomp_full_mthread 00:07:27.224 ************************************ 00:07:27.224 13:46:21 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:27.224 13:46:21 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:27.224 13:46:21 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:27.224 13:46:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.224 13:46:21 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:27.224 13:46:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.224 13:46:21 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:27.224 13:46:21 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:27.224 13:46:21 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:27.224 13:46:21 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:27.224 13:46:21 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.224 13:46:21 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.224 13:46:21 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:27.224 13:46:21 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:27.224 13:46:21 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:27.224 [2024-07-15 13:46:21.848374] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
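Every accel_perf invocation in this suite also carries -c /dev/fd/62: build_accel_config assembles an accel JSON configuration in the shell (empty here, since none of the accel_json_cfg branches fired) and hands it to the app on an inherited file descriptor rather than a temporary file. Roughly the same effect can be had with process substitution; whether accel_perf is happy with a bare empty object in place of the harness-built config is an assumption.

  # <(...) appears inside the app as /dev/fd/N, mirroring the -c /dev/fd/62 pattern recorded above
  ./build/examples/accel_perf -c <(echo '{}') -t 1 -w decompress -l test/accel/bib -y -o 0 -T 2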
00:07:27.224 [2024-07-15 13:46:21.848435] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3646583 ] 00:07:27.224 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.224 [2024-07-15 13:46:21.904948] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.224 [2024-07-15 13:46:22.011937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.485 13:46:22 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.485 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.486 13:46:22 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:27.486 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.486 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.486 13:46:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.865 13:46:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:28.865 13:46:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.865 13:46:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.865 13:46:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.865 13:46:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:28.865 13:46:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.865 13:46:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.865 13:46:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.865 13:46:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:28.865 13:46:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.865 13:46:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.865 13:46:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.865 13:46:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:28.865 13:46:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.865 13:46:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.865 13:46:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.865 13:46:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:28.865 13:46:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.865 13:46:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.865 13:46:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.865 13:46:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:28.865 13:46:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.865 13:46:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.865 13:46:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.865 13:46:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:28.865 13:46:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.865 13:46:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.865 13:46:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.865 13:46:23 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:28.865 13:46:23 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:28.865 13:46:23 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:28.865 00:07:28.865 real 0m1.465s 00:07:28.865 user 0m1.325s 00:07:28.865 sys 0m0.143s 00:07:28.865 13:46:23 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.865 13:46:23 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:28.865 ************************************ 00:07:28.865 END 
TEST accel_decomp_full_mthread 00:07:28.865 ************************************ 00:07:28.865 13:46:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:28.865 13:46:23 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:28.865 13:46:23 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:28.865 13:46:23 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:28.865 13:46:23 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:28.865 13:46:23 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:28.865 13:46:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.865 13:46:23 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:28.865 13:46:23 accel -- common/autotest_common.sh@10 -- # set +x 00:07:28.865 13:46:23 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.865 13:46:23 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.865 13:46:23 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:28.865 13:46:23 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:28.865 13:46:23 accel -- accel/accel.sh@41 -- # jq -r . 00:07:28.865 ************************************ 00:07:28.865 START TEST accel_dif_functional_tests 00:07:28.865 ************************************ 00:07:28.866 13:46:23 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:28.866 [2024-07-15 13:46:23.382879] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:07:28.866 [2024-07-15 13:46:23.382939] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3646745 ] 00:07:28.866 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.866 [2024-07-15 13:46:23.439673] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:28.866 [2024-07-15 13:46:23.546572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.866 [2024-07-15 13:46:23.546675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:28.866 [2024-07-15 13:46:23.546684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.866 00:07:28.866 00:07:28.866 CUnit - A unit testing framework for C - Version 2.1-3 00:07:28.866 http://cunit.sourceforge.net/ 00:07:28.866 00:07:28.866 00:07:28.866 Suite: accel_dif 00:07:28.866 Test: verify: DIF generated, GUARD check ...passed 00:07:28.866 Test: verify: DIF generated, APPTAG check ...passed 00:07:28.866 Test: verify: DIF generated, REFTAG check ...passed 00:07:28.866 Test: verify: DIF not generated, GUARD check ...[2024-07-15 13:46:23.644428] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:28.866 passed 00:07:28.866 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 13:46:23.644509] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:28.866 passed 00:07:28.866 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 13:46:23.644541] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:28.866 passed 00:07:28.866 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:28.866 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 
13:46:23.644602] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:28.866 passed 00:07:28.866 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:28.866 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:28.866 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:28.866 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 13:46:23.644765] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:28.866 passed 00:07:28.866 Test: verify copy: DIF generated, GUARD check ...passed 00:07:28.866 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:28.866 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:28.866 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 13:46:23.644926] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:28.866 passed 00:07:28.866 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 13:46:23.644965] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:28.866 passed 00:07:28.866 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 13:46:23.644999] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:28.866 passed 00:07:28.866 Test: generate copy: DIF generated, GUARD check ...passed 00:07:28.866 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:28.866 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:28.866 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:28.866 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:28.866 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:28.866 Test: generate copy: iovecs-len validate ...[2024-07-15 13:46:23.645233] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:28.866 passed 00:07:28.866 Test: generate copy: buffer alignment validate ...passed 00:07:28.866 00:07:28.866 Run Summary: Type Total Ran Passed Failed Inactive 00:07:28.866 suites 1 1 n/a 0 0 00:07:28.866 tests 26 26 26 0 0 00:07:28.866 asserts 115 115 115 0 n/a 00:07:28.866 00:07:28.866 Elapsed time = 0.003 seconds 00:07:29.125 00:07:29.125 real 0m0.539s 00:07:29.125 user 0m0.846s 00:07:29.125 sys 0m0.164s 00:07:29.125 13:46:23 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.125 13:46:23 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:29.125 ************************************ 00:07:29.125 END TEST accel_dif_functional_tests 00:07:29.125 ************************************ 00:07:29.125 13:46:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:29.125 00:07:29.125 real 0m32.544s 00:07:29.125 user 0m36.222s 00:07:29.125 sys 0m4.327s 00:07:29.125 13:46:23 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.125 13:46:23 accel -- common/autotest_common.sh@10 -- # set +x 00:07:29.125 ************************************ 00:07:29.125 END TEST accel 00:07:29.125 ************************************ 00:07:29.125 13:46:23 -- common/autotest_common.sh@1142 -- # return 0 00:07:29.125 13:46:23 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:29.125 13:46:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:29.125 13:46:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.125 13:46:23 -- common/autotest_common.sh@10 -- # set +x 00:07:29.125 ************************************ 00:07:29.125 START TEST accel_rpc 00:07:29.125 ************************************ 00:07:29.125 13:46:23 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:29.385 * Looking for test storage... 00:07:29.385 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:29.385 13:46:24 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:29.385 13:46:24 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3646902 00:07:29.385 13:46:24 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:29.385 13:46:24 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 3646902 00:07:29.385 13:46:24 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 3646902 ']' 00:07:29.385 13:46:24 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.385 13:46:24 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:29.385 13:46:24 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.385 13:46:24 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:29.385 13:46:24 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.385 [2024-07-15 13:46:24.055362] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
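A note on the accel_dif_functional_tests output above: that block is a CUnit suite (test/accel/dif/dif), not an accel_perf run, and the dif.c *ERROR* lines are expected, since each "not generated" case feeds a deliberately mismatched Guard/App/Ref tag and then asserts that verification reports exactly that mismatch. The suite can be re-run on its own; the harness additionally passes its generated accel config as -c /dev/fd/62, which this sketch omits on the assumption that the binary also runs without a config:

  ./test/accel/dif/dif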
00:07:29.385 [2024-07-15 13:46:24.055461] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3646902 ] 00:07:29.385 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.385 [2024-07-15 13:46:24.113029] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.385 [2024-07-15 13:46:24.218770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.644 13:46:24 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:29.644 13:46:24 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:29.644 13:46:24 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:29.644 13:46:24 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:29.644 13:46:24 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:29.644 13:46:24 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:29.644 13:46:24 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:29.644 13:46:24 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:29.644 13:46:24 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.644 13:46:24 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.644 ************************************ 00:07:29.644 START TEST accel_assign_opcode 00:07:29.644 ************************************ 00:07:29.644 13:46:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:07:29.644 13:46:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:29.644 13:46:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.644 13:46:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:29.644 [2024-07-15 13:46:24.287411] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:29.644 13:46:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.644 13:46:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:29.644 13:46:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.644 13:46:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:29.644 [2024-07-15 13:46:24.295423] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:29.644 13:46:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.644 13:46:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:29.644 13:46:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.644 13:46:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:29.904 13:46:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.904 13:46:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:29.904 13:46:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:29.904 13:46:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 
00:07:29.904 13:46:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:29.904 13:46:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:29.904 13:46:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.904 software 00:07:29.904 00:07:29.904 real 0m0.272s 00:07:29.904 user 0m0.036s 00:07:29.904 sys 0m0.005s 00:07:29.904 13:46:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.904 13:46:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:29.904 ************************************ 00:07:29.904 END TEST accel_assign_opcode 00:07:29.904 ************************************ 00:07:29.904 13:46:24 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:29.904 13:46:24 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 3646902 00:07:29.904 13:46:24 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 3646902 ']' 00:07:29.904 13:46:24 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 3646902 00:07:29.904 13:46:24 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:07:29.904 13:46:24 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:29.904 13:46:24 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3646902 00:07:29.904 13:46:24 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:29.904 13:46:24 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:29.904 13:46:24 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3646902' 00:07:29.904 killing process with pid 3646902 00:07:29.904 13:46:24 accel_rpc -- common/autotest_common.sh@967 -- # kill 3646902 00:07:29.904 13:46:24 accel_rpc -- common/autotest_common.sh@972 -- # wait 3646902 00:07:30.470 00:07:30.470 real 0m1.093s 00:07:30.470 user 0m1.042s 00:07:30.470 sys 0m0.400s 00:07:30.470 13:46:25 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:30.470 13:46:25 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:30.470 ************************************ 00:07:30.470 END TEST accel_rpc 00:07:30.470 ************************************ 00:07:30.470 13:46:25 -- common/autotest_common.sh@1142 -- # return 0 00:07:30.470 13:46:25 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:30.470 13:46:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:30.470 13:46:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.470 13:46:25 -- common/autotest_common.sh@10 -- # set +x 00:07:30.470 ************************************ 00:07:30.470 START TEST app_cmdline 00:07:30.470 ************************************ 00:07:30.470 13:46:25 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:30.470 * Looking for test storage... 
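The accel_assign_opcode test that just finished captures the whole RPC workflow for pinning an opcode to a module: spdk_tgt is started with --wait-for-rpc so only the JSON-RPC server is up, accel_assign_opc maps the copy opcode (first to the bogus module "incorrect", then to "software"), framework_start_init completes subsystem initialization, and accel_get_opc_assignments confirms the mapping stuck. The same sequence can be replayed by hand against the default socket /var/tmp/spdk.sock, assuming hugepages are already set up; the harness waits with its waitforlisten helper, for which a simple poll stands in here:

  ./build/bin/spdk_tgt --wait-for-rpc &
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
  ./scripts/rpc.py accel_assign_opc -o copy -m software
  ./scripts/rpc.py framework_start_init
  ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy   # prints: software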
00:07:30.470 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:30.470 13:46:25 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:30.470 13:46:25 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3647113 00:07:30.470 13:46:25 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:30.470 13:46:25 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3647113 00:07:30.470 13:46:25 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 3647113 ']' 00:07:30.470 13:46:25 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.470 13:46:25 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:30.470 13:46:25 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.470 13:46:25 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:30.470 13:46:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:30.470 [2024-07-15 13:46:25.192134] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:07:30.470 [2024-07-15 13:46:25.192231] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3647113 ] 00:07:30.470 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.470 [2024-07-15 13:46:25.248944] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.728 [2024-07-15 13:46:25.355406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.985 13:46:25 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:30.985 13:46:25 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:30.985 13:46:25 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:31.242 { 00:07:31.242 "version": "SPDK v24.09-pre git sha1 b124a6951", 00:07:31.242 "fields": { 00:07:31.242 "major": 24, 00:07:31.242 "minor": 9, 00:07:31.242 "patch": 0, 00:07:31.242 "suffix": "-pre", 00:07:31.242 "commit": "b124a6951" 00:07:31.242 } 00:07:31.242 } 00:07:31.242 13:46:25 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:31.242 13:46:25 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:31.242 13:46:25 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:31.243 13:46:25 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:31.243 13:46:25 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:31.243 13:46:25 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.243 13:46:25 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:31.243 13:46:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:31.243 13:46:25 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:31.243 13:46:25 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.243 13:46:25 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:31.243 13:46:25 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:31.243 13:46:25 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:31.243 13:46:25 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:31.243 13:46:25 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:31.243 13:46:25 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:31.243 13:46:25 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:31.243 13:46:25 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:31.243 13:46:25 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:31.243 13:46:25 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:31.243 13:46:25 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:31.243 13:46:25 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:31.243 13:46:25 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:31.243 13:46:25 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:31.502 request: 00:07:31.502 { 00:07:31.502 "method": "env_dpdk_get_mem_stats", 00:07:31.502 "req_id": 1 00:07:31.502 } 00:07:31.502 Got JSON-RPC error response 00:07:31.502 response: 00:07:31.502 { 00:07:31.502 "code": -32601, 00:07:31.502 "message": "Method not found" 00:07:31.502 } 00:07:31.502 13:46:26 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:31.502 13:46:26 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:31.502 13:46:26 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:31.502 13:46:26 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:31.502 13:46:26 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3647113 00:07:31.502 13:46:26 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 3647113 ']' 00:07:31.502 13:46:26 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 3647113 00:07:31.502 13:46:26 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:07:31.502 13:46:26 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:31.502 13:46:26 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3647113 00:07:31.502 13:46:26 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:31.502 13:46:26 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:31.502 13:46:26 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3647113' 00:07:31.502 killing process with pid 3647113 00:07:31.502 13:46:26 app_cmdline -- common/autotest_common.sh@967 -- # kill 3647113 00:07:31.502 13:46:26 app_cmdline -- common/autotest_common.sh@972 -- # wait 3647113 00:07:32.069 00:07:32.069 real 0m1.520s 00:07:32.069 user 0m1.852s 00:07:32.069 sys 0m0.431s 00:07:32.069 13:46:26 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 
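The app_cmdline test above starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so any method outside that whitelist is rejected with JSON-RPC error -32601 ("Method not found"), exactly the response logged for env_dpdk_get_mem_stats. Reproducing that by hand from an SPDK checkout looks roughly like this (relative paths instead of the full workspace prefix used in this job):

  ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  # wait for /var/tmp/spdk.sock to appear, then:
  ./scripts/rpc.py spdk_get_version          # allowed, returns the version object shown above
  ./scripts/rpc.py rpc_get_methods           # allowed, lists exactly the two whitelisted methods
  ./scripts/rpc.py env_dpdk_get_mem_stats    # rejected with -32601 "Method not found"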
00:07:32.069 13:46:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:32.069 ************************************ 00:07:32.069 END TEST app_cmdline 00:07:32.069 ************************************ 00:07:32.069 13:46:26 -- common/autotest_common.sh@1142 -- # return 0 00:07:32.069 13:46:26 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:32.069 13:46:26 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:32.069 13:46:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.069 13:46:26 -- common/autotest_common.sh@10 -- # set +x 00:07:32.069 ************************************ 00:07:32.069 START TEST version 00:07:32.069 ************************************ 00:07:32.069 13:46:26 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:32.069 * Looking for test storage... 00:07:32.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:32.069 13:46:26 version -- app/version.sh@17 -- # get_header_version major 00:07:32.069 13:46:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:32.069 13:46:26 version -- app/version.sh@14 -- # cut -f2 00:07:32.069 13:46:26 version -- app/version.sh@14 -- # tr -d '"' 00:07:32.069 13:46:26 version -- app/version.sh@17 -- # major=24 00:07:32.069 13:46:26 version -- app/version.sh@18 -- # get_header_version minor 00:07:32.069 13:46:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:32.069 13:46:26 version -- app/version.sh@14 -- # cut -f2 00:07:32.069 13:46:26 version -- app/version.sh@14 -- # tr -d '"' 00:07:32.069 13:46:26 version -- app/version.sh@18 -- # minor=9 00:07:32.069 13:46:26 version -- app/version.sh@19 -- # get_header_version patch 00:07:32.069 13:46:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:32.069 13:46:26 version -- app/version.sh@14 -- # cut -f2 00:07:32.069 13:46:26 version -- app/version.sh@14 -- # tr -d '"' 00:07:32.069 13:46:26 version -- app/version.sh@19 -- # patch=0 00:07:32.069 13:46:26 version -- app/version.sh@20 -- # get_header_version suffix 00:07:32.069 13:46:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:32.069 13:46:26 version -- app/version.sh@14 -- # cut -f2 00:07:32.069 13:46:26 version -- app/version.sh@14 -- # tr -d '"' 00:07:32.069 13:46:26 version -- app/version.sh@20 -- # suffix=-pre 00:07:32.069 13:46:26 version -- app/version.sh@22 -- # version=24.9 00:07:32.069 13:46:26 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:32.069 13:46:26 version -- app/version.sh@28 -- # version=24.9rc0 00:07:32.069 13:46:26 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:32.069 13:46:26 version -- app/version.sh@30 -- # python3 -c 'import spdk; 
print(spdk.__version__)' 00:07:32.069 13:46:26 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:32.069 13:46:26 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:32.069 00:07:32.069 real 0m0.112s 00:07:32.069 user 0m0.062s 00:07:32.069 sys 0m0.071s 00:07:32.069 13:46:26 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:32.069 13:46:26 version -- common/autotest_common.sh@10 -- # set +x 00:07:32.069 ************************************ 00:07:32.069 END TEST version 00:07:32.069 ************************************ 00:07:32.069 13:46:26 -- common/autotest_common.sh@1142 -- # return 0 00:07:32.069 13:46:26 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:32.069 13:46:26 -- spdk/autotest.sh@198 -- # uname -s 00:07:32.069 13:46:26 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:32.069 13:46:26 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:32.069 13:46:26 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:32.069 13:46:26 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:32.069 13:46:26 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:32.069 13:46:26 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:32.069 13:46:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:32.069 13:46:26 -- common/autotest_common.sh@10 -- # set +x 00:07:32.069 13:46:26 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:32.069 13:46:26 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:32.069 13:46:26 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:32.069 13:46:26 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:32.069 13:46:26 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:32.069 13:46:26 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:32.069 13:46:26 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:32.069 13:46:26 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:32.069 13:46:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.069 13:46:26 -- common/autotest_common.sh@10 -- # set +x 00:07:32.069 ************************************ 00:07:32.069 START TEST nvmf_tcp 00:07:32.069 ************************************ 00:07:32.069 13:46:26 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:32.069 * Looking for test storage... 00:07:32.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:32.069 13:46:26 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:32.069 13:46:26 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:32.069 13:46:26 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:32.069 13:46:26 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:32.069 13:46:26 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:32.069 13:46:26 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:32.069 13:46:26 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:32.069 13:46:26 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:32.069 13:46:26 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:32.069 13:46:26 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:32.069 13:46:26 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:32.069 13:46:26 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:32.069 13:46:26 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:32.069 13:46:26 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:32.069 13:46:26 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:32.069 13:46:26 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:32.069 13:46:26 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:32.069 13:46:26 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:32.069 13:46:26 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:32.069 13:46:26 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:32.069 13:46:26 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:32.069 13:46:26 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:32.069 13:46:26 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:32.069 13:46:26 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:32.069 13:46:26 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.069 13:46:26 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.069 13:46:26 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.069 13:46:26 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:32.069 13:46:26 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.069 13:46:26 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:32.069 13:46:26 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:32.069 13:46:26 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:32.069 13:46:26 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:32.069 13:46:26 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:32.069 13:46:26 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:32.069 13:46:26 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:32.069 13:46:26 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:32.069 13:46:26 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:32.327 13:46:26 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:32.327 13:46:26 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:32.327 13:46:26 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:32.327 13:46:26 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:32.327 13:46:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:32.327 13:46:26 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:32.327 13:46:26 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:32.327 13:46:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:32.327 13:46:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.327 13:46:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:32.327 ************************************ 00:07:32.327 START TEST nvmf_example 00:07:32.327 ************************************ 00:07:32.327 13:46:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:32.327 * Looking for test storage... 
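nvmf/common.sh, sourced above, fixes the constants the TCP tests share: listener ports 4420-4422, the 192.168.100 address prefix, a host NQN freshly generated with nvme gen-hostnqn, and the NVME_CONNECT/NVME_SUBNQN pair used by the kernel-initiator tests. The nvmf_example test that follows drives the target with spdk_nvme_perf instead, but for orientation this is roughly how those variables are consumed where nvme-cli is the initiator (addresses match the ones used later in this log; the invocation itself is illustrative, not something this particular test runs):

  HOSTNQN=$(nvme gen-hostnqn)                 # same generator common.sh uses
  nvme connect -t tcp -a 10.0.0.2 -s 4420 \
      -n nqn.2016-06.io.spdk:testnqn --hostnqn="$HOSTNQN"
  nvme list                                   # the exported namespace appears as /dev/nvmeXnY
  nvme disconnect -n nqn.2016-06.io.spdk:testnqn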
00:07:32.327 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:32.327 13:46:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:32.327 13:46:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:32.327 13:46:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:32.327 13:46:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:32.327 13:46:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:32.327 13:46:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:32.327 13:46:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:32.327 13:46:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:32.327 13:46:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:32.327 13:46:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:32.327 13:46:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:32.327 13:46:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:32.327 13:46:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:32.327 13:46:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:32.327 13:46:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:32.327 13:46:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:32.327 13:46:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:32.327 13:46:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:32.327 13:46:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:32.327 13:46:26 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:32.327 13:46:26 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:32.327 13:46:26 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:32.327 13:46:26 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.327 13:46:26 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.327 13:46:26 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.327 13:46:26 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:32.328 13:46:26 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.328 13:46:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:32.328 13:46:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:32.328 13:46:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:32.328 13:46:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:32.328 13:46:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:32.328 13:46:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:32.328 13:46:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:32.328 13:46:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:32.328 13:46:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:32.328 13:46:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:32.328 13:46:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:32.328 13:46:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:32.328 13:46:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:32.328 13:46:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:32.328 13:46:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:32.328 13:46:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:32.328 13:46:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:32.328 13:46:27 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:07:32.328 13:46:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:32.328 13:46:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:32.328 13:46:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:32.328 13:46:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:32.328 13:46:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:32.328 13:46:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:32.328 13:46:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:32.328 13:46:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:32.328 13:46:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:32.328 13:46:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:32.328 13:46:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:32.328 13:46:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:32.328 13:46:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:32.328 13:46:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:34.852 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:34.852 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:34.852 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:34.852 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:34.852 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:34.852 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:34.852 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:34.852 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:34.852 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:34.852 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:34.852 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:34.852 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:34.852 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:34.852 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:34.852 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:34.852 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:34.852 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:34.852 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:34.852 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:34.852 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:07:34.853 Found 0000:84:00.0 (0x8086 - 0x159b) 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:07:34.853 Found 0000:84:00.1 (0x8086 - 0x159b) 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:07:34.853 Found net devices under 
0000:84:00.0: cvl_0_0 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:07:34.853 Found net devices under 0000:84:00.1: cvl_0_1 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:34.853 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:34.853 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:07:34.853 00:07:34.853 --- 10.0.0.2 ping statistics --- 00:07:34.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.853 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:34.853 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:34.853 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms 00:07:34.853 00:07:34.853 --- 10.0.0.1 ping statistics --- 00:07:34.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.853 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3649060 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3649060 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 3649060 ']' 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
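The two successful pings above confirm the interface plumbing for this job: one port of the e810 pair (cvl_0_0) is moved into a private network namespace and carries the target address, while its sibling (cvl_0_1) stays in the root namespace as the initiator side. Condensed from the commands in the log, with device names and addresses exactly as used here:

  ip netns add cvl_0_0_ns_spdk                        # target namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                  # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # and back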
00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:34.853 13:46:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:34.853 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.867 13:46:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:35.867 13:46:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:07:35.867 13:46:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:35.867 13:46:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:35.867 13:46:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:35.867 13:46:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:35.867 13:46:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.867 13:46:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:35.867 13:46:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.867 13:46:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:35.867 13:46:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.867 13:46:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:35.867 13:46:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.867 13:46:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:35.867 13:46:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:35.867 13:46:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.867 13:46:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:35.868 13:46:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.868 13:46:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:35.868 13:46:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:35.868 13:46:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.868 13:46:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:35.868 13:46:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.868 13:46:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:35.868 13:46:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.868 13:46:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:35.868 13:46:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.868 13:46:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:35.868 13:46:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:35.868 EAL: No free 2048 kB hugepages reported on node 1 
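The rpc_cmd sequence just above is the entire target configuration for this example run: a TCP transport, one 64 MiB malloc bdev, a subsystem with that bdev as its namespace, and a listener on the namespaced address. Written out directly against scripts/rpc.py, followed by the perf invocation the log uses (commands mirror the log; only the rpc.py wrapper is spelled out):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512          # creates Malloc0: 64 MiB of 512-byte blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # queue depth 64, 4 KiB I/O, random read/write with a 30% read share, 10 second run
  ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'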
00:07:45.836 Initializing NVMe Controllers 00:07:45.836 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:45.836 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:45.836 Initialization complete. Launching workers. 00:07:45.836 ======================================================== 00:07:45.836 Latency(us) 00:07:45.836 Device Information : IOPS MiB/s Average min max 00:07:45.836 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15005.50 58.62 4265.88 703.20 15433.15 00:07:45.836 ======================================================== 00:07:45.836 Total : 15005.50 58.62 4265.88 703.20 15433.15 00:07:45.836 00:07:45.836 13:46:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:45.836 13:46:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:45.836 13:46:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:45.836 13:46:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:45.836 13:46:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:45.836 13:46:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:45.836 13:46:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:45.836 13:46:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:45.836 rmmod nvme_tcp 00:07:45.836 rmmod nvme_fabrics 00:07:45.836 rmmod nvme_keyring 00:07:45.836 13:46:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:45.836 13:46:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:45.836 13:46:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:45.836 13:46:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 3649060 ']' 00:07:45.836 13:46:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 3649060 00:07:45.836 13:46:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 3649060 ']' 00:07:45.836 13:46:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 3649060 00:07:45.836 13:46:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:07:45.836 13:46:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:45.836 13:46:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3649060 00:07:45.836 13:46:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:07:45.836 13:46:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:07:45.836 13:46:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3649060' 00:07:45.836 killing process with pid 3649060 00:07:45.836 13:46:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 3649060 00:07:45.836 13:46:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 3649060 00:07:46.094 nvmf threads initialize successfully 00:07:46.094 bdev subsystem init successfully 00:07:46.094 created a nvmf target service 00:07:46.094 create targets's poll groups done 00:07:46.094 all subsystems of target started 00:07:46.094 nvmf target is running 00:07:46.094 all subsystems of target stopped 00:07:46.094 destroy targets's poll groups done 00:07:46.094 destroyed the nvmf target service 00:07:46.094 bdev subsystem finish successfully 00:07:46.094 nvmf threads destroy successfully 00:07:46.094 13:46:40 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:46.094 13:46:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:46.094 13:46:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:46.094 13:46:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:46.094 13:46:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:46.094 13:46:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:46.094 13:46:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:46.094 13:46:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:48.632 13:46:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:48.632 13:46:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:48.632 13:46:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:48.632 13:46:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:48.632 00:07:48.632 real 0m16.017s 00:07:48.632 user 0m44.857s 00:07:48.632 sys 0m3.620s 00:07:48.632 13:46:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:48.632 13:46:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:48.632 ************************************ 00:07:48.632 END TEST nvmf_example 00:07:48.632 ************************************ 00:07:48.632 13:46:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:48.632 13:46:42 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:48.632 13:46:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:48.632 13:46:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:48.632 13:46:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:48.632 ************************************ 00:07:48.632 START TEST nvmf_filesystem 00:07:48.632 ************************************ 00:07:48.632 13:46:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:48.632 * Looking for test storage... 
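Teardown, visible just above before nvmf_filesystem starts, mirrors the setup: the host-side NVMe modules are unloaded (the rmmod lines for nvme_tcp, nvme_fabrics and nvme_keyring), the nvmf app is killed, and the namespace plumbing is removed. By hand that amounts to roughly the following; remove_spdk_ns is assumed here to boil down to deleting the namespace, which also returns cvl_0_0 to the root namespace:

  kill "$nvmfpid" && wait "$nvmfpid"      # stop the example nvmf target
  modprobe -v -r nvme-tcp                 # drops nvme_tcp plus its fabrics/keyring dependencies
  modprobe -v -r nvme-fabrics
  ip netns delete cvl_0_0_ns_spdk         # assumption: the effect of the harness's remove_spdk_ns
  ip -4 addr flush cvl_0_1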
00:07:48.632 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:48.632 13:46:43 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:48.632 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:48.632 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:48.632 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:48.632 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:48.632 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:48.632 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:48.632 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:48.632 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:48.632 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:48.632 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:48.632 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:48.632 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:48.632 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:48.632 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:48.632 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:48.632 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:48.632 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:48.632 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:48.632 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:48.632 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:48.633 13:46:43 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:48.633 13:46:43 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:48.633 #define SPDK_CONFIG_H 00:07:48.633 #define SPDK_CONFIG_APPS 1 00:07:48.633 #define SPDK_CONFIG_ARCH native 00:07:48.633 #undef SPDK_CONFIG_ASAN 00:07:48.633 #undef SPDK_CONFIG_AVAHI 00:07:48.633 #undef SPDK_CONFIG_CET 00:07:48.633 #define SPDK_CONFIG_COVERAGE 1 00:07:48.633 #define SPDK_CONFIG_CROSS_PREFIX 00:07:48.633 #undef SPDK_CONFIG_CRYPTO 00:07:48.633 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:48.633 #undef SPDK_CONFIG_CUSTOMOCF 00:07:48.633 #undef SPDK_CONFIG_DAOS 00:07:48.633 #define SPDK_CONFIG_DAOS_DIR 00:07:48.633 #define SPDK_CONFIG_DEBUG 1 00:07:48.633 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:48.633 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:48.633 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:48.633 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:48.633 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:48.633 #undef SPDK_CONFIG_DPDK_UADK 00:07:48.633 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:48.633 #define SPDK_CONFIG_EXAMPLES 1 00:07:48.633 #undef SPDK_CONFIG_FC 00:07:48.633 #define SPDK_CONFIG_FC_PATH 00:07:48.633 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:48.633 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:48.633 #undef SPDK_CONFIG_FUSE 00:07:48.633 #undef SPDK_CONFIG_FUZZER 00:07:48.633 #define SPDK_CONFIG_FUZZER_LIB 00:07:48.633 #undef SPDK_CONFIG_GOLANG 00:07:48.633 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:48.633 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:48.633 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:48.633 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:48.633 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:48.633 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:48.633 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:48.633 #define SPDK_CONFIG_IDXD 1 00:07:48.633 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:48.633 #undef SPDK_CONFIG_IPSEC_MB 00:07:48.633 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:48.633 #define SPDK_CONFIG_ISAL 1 00:07:48.633 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:48.633 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:48.633 #define SPDK_CONFIG_LIBDIR 00:07:48.633 #undef SPDK_CONFIG_LTO 00:07:48.633 #define SPDK_CONFIG_MAX_LCORES 128 00:07:48.633 #define SPDK_CONFIG_NVME_CUSE 1 00:07:48.633 #undef SPDK_CONFIG_OCF 00:07:48.633 #define SPDK_CONFIG_OCF_PATH 00:07:48.633 #define 
SPDK_CONFIG_OPENSSL_PATH 00:07:48.633 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:48.633 #define SPDK_CONFIG_PGO_DIR 00:07:48.633 #undef SPDK_CONFIG_PGO_USE 00:07:48.633 #define SPDK_CONFIG_PREFIX /usr/local 00:07:48.633 #undef SPDK_CONFIG_RAID5F 00:07:48.633 #undef SPDK_CONFIG_RBD 00:07:48.633 #define SPDK_CONFIG_RDMA 1 00:07:48.633 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:48.633 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:48.633 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:48.634 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:48.634 #define SPDK_CONFIG_SHARED 1 00:07:48.634 #undef SPDK_CONFIG_SMA 00:07:48.634 #define SPDK_CONFIG_TESTS 1 00:07:48.634 #undef SPDK_CONFIG_TSAN 00:07:48.634 #define SPDK_CONFIG_UBLK 1 00:07:48.634 #define SPDK_CONFIG_UBSAN 1 00:07:48.634 #undef SPDK_CONFIG_UNIT_TESTS 00:07:48.634 #undef SPDK_CONFIG_URING 00:07:48.634 #define SPDK_CONFIG_URING_PATH 00:07:48.634 #undef SPDK_CONFIG_URING_ZNS 00:07:48.634 #undef SPDK_CONFIG_USDT 00:07:48.634 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:48.634 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:48.634 #define SPDK_CONFIG_VFIO_USER 1 00:07:48.634 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:48.634 #define SPDK_CONFIG_VHOST 1 00:07:48.634 #define SPDK_CONFIG_VIRTIO 1 00:07:48.634 #undef SPDK_CONFIG_VTUNE 00:07:48.634 #define SPDK_CONFIG_VTUNE_DIR 00:07:48.634 #define SPDK_CONFIG_WERROR 1 00:07:48.634 #define SPDK_CONFIG_WPDK_DIR 00:07:48.634 #undef SPDK_CONFIG_XNVME 00:07:48.634 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:48.634 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:07:48.635 13:46:43 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:07:48.635 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
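[editor's note] The stretch of trace above ends with the harness wiring up the sanitizer runtime options and rebuilding the leak-sanitizer suppression file before the test proper starts. A minimal consolidation of those steps is sketched below; it only reuses values that appear verbatim in the trace, and the way the suppression entry lands in the file is paraphrased rather than quoted from autotest_common.sh.

    # Sanitizer knobs exported by the harness (values copied from the trace above)
    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
    # The LSAN suppression file is recreated on every run and seeded with one entry
    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"
    echo "leak:libfuse3.so" >> "$asan_suppression_file"
    export LSAN_OPTIONS=suppressions=$asan_suppression_file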
00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j48 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 3650765 ]] 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 3650765 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.ymCxH6 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.ymCxH6/tests/target /tmp/spdk.ymCxH6 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
avails["$mount"]=67108864 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=949354496 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4335075328 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=38812233728 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=45083312128 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=6271078400 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=22538280960 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=22541656064 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=9007878144 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=9016664064 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=8785920 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=22541004800 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=22541656064 00:07:48.636 13:46:43 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=651264 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:48.636 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=4508323840 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=4508327936 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:48.637 * Looking for test storage... 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=38812233728 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=8485670912 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:48.637 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:07:48.637 13:46:43 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
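[editor's note] The set_test_storage walk a few entries above reduces to a small piece of arithmetic over the df -T snapshot. The sketch below redoes it with the numbers from this run; the 64 MiB padding on top of the 2 GiB request is inferred from the two requested_size figures in the trace, not quoted from the script.

    # Figures taken from the overlay mount on / in the df -T output above
    requested_size=$((2147483648 + 64 * 1024 * 1024))   # 2214592512 bytes requested
    target_space=38812233728                            # bytes available on the candidate mount
    fs_size=45083312128                                 # total size of that mount
    already_used=6271078400                             # bytes already in use
    (( target_space >= requested_size )) || echo "too small, try the next candidate dir"
    new_size=$(( already_used + requested_size ))       # 8485670912
    if (( new_size * 100 / fs_size > 95 )); then        # ~18% here, so the check passes
        echo "test would nearly fill the filesystem"
    fi
    export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target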
00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:48.637 13:46:43 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:48.637 13:46:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:48.638 13:46:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:48.638 13:46:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:48.638 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:48.638 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:48.638 13:46:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:48.638 13:46:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:48.638 13:46:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:48.638 13:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:50.538 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:50.538 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:50.538 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:50.538 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:50.538 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:50.538 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:50.538 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:50.538 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:50.538 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:50.538 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:50.538 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:50.538 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:50.538 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:50.538 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:50.538 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:50.538 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:50.538 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:50.538 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:50.538 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:50.538 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:50.538 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:50.538 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:50.538 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:50.538 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:50.538 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:50.538 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:50.538 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:50.538 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:50.538 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:50.538 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:50.538 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:50.538 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:50.538 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:50.538 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:07:50.538 Found 0000:84:00.0 (0x8086 - 0x159b) 00:07:50.538 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:50.538 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:50.538 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:50.538 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:50.538 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:50.539 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:50.539 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:07:50.539 Found 0000:84:00.1 (0x8086 - 0x159b) 00:07:50.539 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:50.539 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:50.539 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:50.539 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:50.539 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:50.539 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:50.539 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:50.539 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:50.539 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:50.539 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:50.539 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:50.539 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:50.539 13:46:45 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:50.539 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:50.539 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:50.539 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:07:50.539 Found net devices under 0000:84:00.0: cvl_0_0 00:07:50.539 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:50.539 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:50.539 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:50.539 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:50.539 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:50.539 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:50.539 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:50.539 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:50.539 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:07:50.539 Found net devices under 0000:84:00.1: cvl_0_1 00:07:50.539 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:50.539 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:50.539 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:50.539 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:50.539 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:50.539 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:50.539 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:50.539 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:50.539 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:50.539 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:50.539 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:50.539 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:50.539 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:50.539 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:50.539 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:50.539 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:50.539 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:50.539 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:50.539 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:50.539 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:50.539 13:46:45 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:50.539 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:50.539 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:50.796 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:50.796 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:50.796 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:50.796 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:50.796 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.148 ms 00:07:50.796 00:07:50.796 --- 10.0.0.2 ping statistics --- 00:07:50.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.796 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:07:50.796 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:50.796 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:50.796 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:07:50.797 00:07:50.797 --- 10.0.0.1 ping statistics --- 00:07:50.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.797 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:07:50.797 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:50.797 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:50.797 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:50.797 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:50.797 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:50.797 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:50.797 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:50.797 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:50.797 13:46:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:50.797 13:46:45 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:50.797 13:46:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:50.797 13:46:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.797 13:46:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:50.797 ************************************ 00:07:50.797 START TEST nvmf_filesystem_no_in_capsule 00:07:50.797 ************************************ 00:07:50.797 13:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:07:50.797 13:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:50.797 13:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:50.797 13:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:50.797 13:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:07:50.797 13:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:50.797 13:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3652413 00:07:50.797 13:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:50.797 13:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3652413 00:07:50.797 13:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 3652413 ']' 00:07:50.797 13:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.797 13:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:50.797 13:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.797 13:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:50.797 13:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:50.797 [2024-07-15 13:46:45.507226] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:07:50.797 [2024-07-15 13:46:45.507322] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:50.797 EAL: No free 2048 kB hugepages reported on node 1 00:07:50.797 [2024-07-15 13:46:45.571668] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:51.056 [2024-07-15 13:46:45.688199] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:51.056 [2024-07-15 13:46:45.688251] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:51.056 [2024-07-15 13:46:45.688274] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:51.056 [2024-07-15 13:46:45.688284] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:51.056 [2024-07-15 13:46:45.688293] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
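For reference, the bring-up traced above by nvmf/common.sh reduces to the short sketch below: the target-side port is moved into its own network namespace, both ends get a /24 address, the NVMe/TCP port is opened in the firewall, reachability is confirmed in both directions, and only then is nvmf_tgt launched under ip netns exec. This is a condensed sketch, not the script itself; the interface names, addresses and namespace name are the ones used in this run (the cvl_* names come from this machine's e810 ports) and would differ elsewhere.

  # condensed from the nvmf_tcp_init trace above (nvmf/common.sh@229-268)
  NVMF_TARGET_INTERFACE=cvl_0_0         # port handed to the SPDK target
  NVMF_INITIATOR_INTERFACE=cvl_0_1      # port the initiator keeps in the root namespace
  NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk

  ip -4 addr flush "$NVMF_TARGET_INTERFACE"
  ip -4 addr flush "$NVMF_INITIATOR_INTERFACE"
  ip netns add "$NVMF_TARGET_NAMESPACE"
  ip link set "$NVMF_TARGET_INTERFACE" netns "$NVMF_TARGET_NAMESPACE"
  ip addr add 10.0.0.1/24 dev "$NVMF_INITIATOR_INTERFACE"
  ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev "$NVMF_TARGET_INTERFACE"
  ip link set "$NVMF_INITIATOR_INTERFACE" up
  ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set "$NVMF_TARGET_INTERFACE" up
  ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
  iptables -I INPUT 1 -i "$NVMF_INITIATOR_INTERFACE" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                          # initiator -> target
  ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1   # target -> initiator
  modprobe nvme-tcp                                           # host-side transport driver
  ip netns exec "$NVMF_TARGET_NAMESPACE" \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &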
00:07:51.056 [2024-07-15 13:46:45.688380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.056 [2024-07-15 13:46:45.688480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:51.056 [2024-07-15 13:46:45.688571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:51.056 [2024-07-15 13:46:45.688578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.056 13:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:51.056 13:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:51.056 13:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:51.056 13:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:51.056 13:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:51.056 13:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:51.056 13:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:51.056 13:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:51.056 13:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.056 13:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:51.056 [2024-07-15 13:46:45.848713] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:51.056 13:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.056 13:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:51.056 13:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.056 13:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:51.314 Malloc1 00:07:51.314 13:46:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.314 13:46:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:51.314 13:46:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.314 13:46:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:51.314 13:46:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.314 13:46:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:51.314 13:46:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.314 13:46:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:07:51.314 13:46:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.314 13:46:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:51.314 13:46:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.314 13:46:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:51.314 [2024-07-15 13:46:46.034419] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:51.314 13:46:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.315 13:46:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:51.315 13:46:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:51.315 13:46:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:51.315 13:46:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:51.315 13:46:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:51.315 13:46:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:51.315 13:46:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.315 13:46:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:51.315 13:46:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.315 13:46:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:51.315 { 00:07:51.315 "name": "Malloc1", 00:07:51.315 "aliases": [ 00:07:51.315 "2afe7729-9716-4fa6-acca-2f211646c1f4" 00:07:51.315 ], 00:07:51.315 "product_name": "Malloc disk", 00:07:51.315 "block_size": 512, 00:07:51.315 "num_blocks": 1048576, 00:07:51.315 "uuid": "2afe7729-9716-4fa6-acca-2f211646c1f4", 00:07:51.315 "assigned_rate_limits": { 00:07:51.315 "rw_ios_per_sec": 0, 00:07:51.315 "rw_mbytes_per_sec": 0, 00:07:51.315 "r_mbytes_per_sec": 0, 00:07:51.315 "w_mbytes_per_sec": 0 00:07:51.315 }, 00:07:51.315 "claimed": true, 00:07:51.315 "claim_type": "exclusive_write", 00:07:51.315 "zoned": false, 00:07:51.315 "supported_io_types": { 00:07:51.315 "read": true, 00:07:51.315 "write": true, 00:07:51.315 "unmap": true, 00:07:51.315 "flush": true, 00:07:51.315 "reset": true, 00:07:51.315 "nvme_admin": false, 00:07:51.315 "nvme_io": false, 00:07:51.315 "nvme_io_md": false, 00:07:51.315 "write_zeroes": true, 00:07:51.315 "zcopy": true, 00:07:51.315 "get_zone_info": false, 00:07:51.315 "zone_management": false, 00:07:51.315 "zone_append": false, 00:07:51.315 "compare": false, 00:07:51.315 "compare_and_write": false, 00:07:51.315 "abort": true, 00:07:51.315 "seek_hole": false, 00:07:51.315 "seek_data": false, 00:07:51.315 "copy": true, 00:07:51.315 "nvme_iov_md": false 00:07:51.315 }, 00:07:51.315 "memory_domains": [ 00:07:51.315 { 
00:07:51.315 "dma_device_id": "system", 00:07:51.315 "dma_device_type": 1 00:07:51.315 }, 00:07:51.315 { 00:07:51.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.315 "dma_device_type": 2 00:07:51.315 } 00:07:51.315 ], 00:07:51.315 "driver_specific": {} 00:07:51.315 } 00:07:51.315 ]' 00:07:51.315 13:46:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:51.315 13:46:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:51.315 13:46:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:51.315 13:46:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:51.315 13:46:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:51.315 13:46:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:51.315 13:46:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:51.315 13:46:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:52.253 13:46:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:52.253 13:46:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:52.253 13:46:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:52.253 13:46:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:52.253 13:46:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:54.158 13:46:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:54.158 13:46:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:54.158 13:46:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:54.158 13:46:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:54.158 13:46:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:54.158 13:46:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:54.158 13:46:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:54.158 13:46:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:54.158 13:46:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:54.158 13:46:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # 
sec_size_to_bytes nvme0n1 00:07:54.158 13:46:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:54.158 13:46:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:54.158 13:46:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:54.158 13:46:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:54.158 13:46:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:54.158 13:46:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:54.158 13:46:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:54.416 13:46:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:54.697 13:46:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:56.075 13:46:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:56.075 13:46:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:56.075 13:46:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:56.075 13:46:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.075 13:46:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:56.075 ************************************ 00:07:56.075 START TEST filesystem_ext4 00:07:56.075 ************************************ 00:07:56.075 13:46:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:56.075 13:46:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:56.075 13:46:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:56.076 13:46:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:56.076 13:46:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:56.076 13:46:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:56.076 13:46:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:56.076 13:46:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:56.076 13:46:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:56.076 13:46:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:56.076 13:46:50 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:56.076 mke2fs 1.46.5 (30-Dec-2021) 00:07:56.076 Discarding device blocks: 0/522240 done 00:07:56.076 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:56.076 Filesystem UUID: bc29911b-965d-45dd-8081-bd28bf34da33 00:07:56.076 Superblock backups stored on blocks: 00:07:56.076 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:56.076 00:07:56.076 Allocating group tables: 0/64 done 00:07:56.076 Writing inode tables: 0/64 done 00:07:57.014 Creating journal (8192 blocks): done 00:07:58.102 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:07:58.102 00:07:58.102 13:46:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:58.102 13:46:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:58.102 13:46:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:58.102 13:46:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:58.102 13:46:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:58.102 13:46:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:58.102 13:46:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:58.102 13:46:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:58.362 13:46:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3652413 00:07:58.362 13:46:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:58.362 13:46:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:58.362 13:46:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:58.362 13:46:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:58.362 00:07:58.362 real 0m2.441s 00:07:58.362 user 0m0.023s 00:07:58.362 sys 0m0.055s 00:07:58.362 13:46:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:58.362 13:46:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:58.362 ************************************ 00:07:58.362 END TEST filesystem_ext4 00:07:58.362 ************************************ 00:07:58.362 13:46:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:58.362 13:46:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:58.362 13:46:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:58.362 13:46:53 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:58.362 13:46:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:58.363 ************************************ 00:07:58.363 START TEST filesystem_btrfs 00:07:58.363 ************************************ 00:07:58.363 13:46:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:58.363 13:46:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:58.363 13:46:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:58.363 13:46:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:58.363 13:46:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:58.363 13:46:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:58.363 13:46:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:58.363 13:46:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:58.363 13:46:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:58.363 13:46:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:58.363 13:46:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:58.620 btrfs-progs v6.6.2 00:07:58.620 See https://btrfs.readthedocs.io for more information. 00:07:58.620 00:07:58.620 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:58.620 NOTE: several default settings have changed in version 5.15, please make sure 00:07:58.620 this does not affect your deployments: 00:07:58.620 - DUP for metadata (-m dup) 00:07:58.620 - enabled no-holes (-O no-holes) 00:07:58.620 - enabled free-space-tree (-R free-space-tree) 00:07:58.620 00:07:58.620 Label: (null) 00:07:58.620 UUID: 2047b0e8-c901-440d-a27f-db682d27a623 00:07:58.620 Node size: 16384 00:07:58.620 Sector size: 4096 00:07:58.620 Filesystem size: 510.00MiB 00:07:58.620 Block group profiles: 00:07:58.620 Data: single 8.00MiB 00:07:58.620 Metadata: DUP 32.00MiB 00:07:58.620 System: DUP 8.00MiB 00:07:58.620 SSD detected: yes 00:07:58.620 Zoned device: no 00:07:58.620 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:58.620 Runtime features: free-space-tree 00:07:58.621 Checksum: crc32c 00:07:58.621 Number of devices: 1 00:07:58.621 Devices: 00:07:58.621 ID SIZE PATH 00:07:58.621 1 510.00MiB /dev/nvme0n1p1 00:07:58.621 00:07:58.621 13:46:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:58.621 13:46:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:59.556 13:46:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:59.556 13:46:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:59.556 13:46:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:59.556 13:46:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:59.556 13:46:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:59.556 13:46:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:59.556 13:46:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3652413 00:07:59.556 13:46:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:59.556 13:46:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:59.556 13:46:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:59.556 13:46:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:59.556 00:07:59.556 real 0m1.231s 00:07:59.556 user 0m0.012s 00:07:59.556 sys 0m0.127s 00:07:59.556 13:46:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:59.556 13:46:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:59.556 ************************************ 00:07:59.556 END TEST filesystem_btrfs 00:07:59.556 ************************************ 00:07:59.556 13:46:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:59.556 13:46:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:59.556 13:46:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:59.556 13:46:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:59.556 13:46:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:59.556 ************************************ 00:07:59.556 START TEST filesystem_xfs 00:07:59.556 ************************************ 00:07:59.556 13:46:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:59.556 13:46:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:59.556 13:46:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:59.556 13:46:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:59.556 13:46:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:59.556 13:46:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:59.556 13:46:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:59.556 13:46:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:07:59.556 13:46:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:59.556 13:46:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:59.556 13:46:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:59.814 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:59.814 = sectsz=512 attr=2, projid32bit=1 00:07:59.814 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:59.814 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:59.814 data = bsize=4096 blocks=130560, imaxpct=25 00:07:59.814 = sunit=0 swidth=0 blks 00:07:59.814 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:59.814 log =internal log bsize=4096 blocks=16384, version=2 00:07:59.814 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:59.814 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:00.381 Discarding blocks...Done. 
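Each filesystem pass in this test (ext4 and btrfs above, xfs whose mkfs output ends here) follows the same pattern: format the exported partition, mount it, do a small write/remove cycle, unmount, then confirm that the target process and the block device both survived. A condensed sketch of that check is below, using the fstype, device and target pid from this run; the retry logic of make_filesystem is omitted.

  # condensed per-filesystem smoke test (target/filesystem.sh@18-43 + make_filesystem)
  fstype=xfs
  dev=/dev/nvme0n1p1
  nvmfpid=3652413

  if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi   # ext4 takes -F, btrfs/xfs take -f
  mkfs.$fstype $force "$dev"

  mount "$dev" /mnt/device
  touch /mnt/device/aaa         # small write ...
  sync
  rm /mnt/device/aaa            # ... and remove it again
  sync
  umount /mnt/device

  kill -0 "$nvmfpid"                          # nvmf_tgt must still be running
  lsblk -l -o NAME | grep -q -w nvme0n1       # namespace still visible to the host
  lsblk -l -o NAME | grep -q -w nvme0n1p1     # partition table survived the cycle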
00:08:00.381 13:46:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:00.381 13:46:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:02.929 13:46:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:02.929 13:46:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:08:02.929 13:46:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:02.929 13:46:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:08:02.929 13:46:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:08:02.929 13:46:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:03.186 13:46:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3652413 00:08:03.186 13:46:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:03.186 13:46:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:03.186 13:46:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:03.186 13:46:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:03.186 00:08:03.186 real 0m3.497s 00:08:03.186 user 0m0.012s 00:08:03.186 sys 0m0.070s 00:08:03.186 13:46:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:03.186 13:46:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:03.186 ************************************ 00:08:03.186 END TEST filesystem_xfs 00:08:03.186 ************************************ 00:08:03.186 13:46:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:03.186 13:46:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:03.445 13:46:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:03.445 13:46:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:03.445 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:03.445 13:46:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:03.445 13:46:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:08:03.445 13:46:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:03.445 13:46:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:03.445 13:46:58 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:03.445 13:46:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:03.445 13:46:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:03.445 13:46:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:03.445 13:46:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.445 13:46:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:03.445 13:46:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.445 13:46:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:03.445 13:46:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3652413 00:08:03.445 13:46:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 3652413 ']' 00:08:03.445 13:46:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 3652413 00:08:03.445 13:46:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:03.445 13:46:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:03.445 13:46:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3652413 00:08:03.445 13:46:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:03.445 13:46:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:03.445 13:46:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3652413' 00:08:03.445 killing process with pid 3652413 00:08:03.445 13:46:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 3652413 00:08:03.445 13:46:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 3652413 00:08:04.012 13:46:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:04.012 00:08:04.012 real 0m13.182s 00:08:04.012 user 0m50.511s 00:08:04.012 sys 0m1.942s 00:08:04.012 13:46:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:04.012 13:46:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.012 ************************************ 00:08:04.012 END TEST nvmf_filesystem_no_in_capsule 00:08:04.012 ************************************ 00:08:04.012 13:46:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:04.012 13:46:58 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:04.012 13:46:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 
-le 1 ']' 00:08:04.012 13:46:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.012 13:46:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:04.012 ************************************ 00:08:04.012 START TEST nvmf_filesystem_in_capsule 00:08:04.012 ************************************ 00:08:04.013 13:46:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:08:04.013 13:46:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:04.013 13:46:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:04.013 13:46:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:04.013 13:46:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:04.013 13:46:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.013 13:46:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3654226 00:08:04.013 13:46:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:04.013 13:46:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3654226 00:08:04.013 13:46:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 3654226 ']' 00:08:04.013 13:46:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.013 13:46:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:04.013 13:46:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.013 13:46:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:04.013 13:46:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.013 [2024-07-15 13:46:58.744747] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:08:04.013 [2024-07-15 13:46:58.744830] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:04.013 EAL: No free 2048 kB hugepages reported on node 1 00:08:04.013 [2024-07-15 13:46:58.807535] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:04.271 [2024-07-15 13:46:58.908142] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:04.271 [2024-07-15 13:46:58.908199] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:04.271 [2024-07-15 13:46:58.908227] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:04.271 [2024-07-15 13:46:58.908239] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:04.271 [2024-07-15 13:46:58.908254] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:04.271 [2024-07-15 13:46:58.908381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:04.271 [2024-07-15 13:46:58.908489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:04.271 [2024-07-15 13:46:58.908584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:04.271 [2024-07-15 13:46:58.908587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.271 13:46:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:04.271 13:46:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:08:04.271 13:46:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:04.271 13:46:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:04.271 13:46:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.272 13:46:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:04.272 13:46:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:04.272 13:46:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:04.272 13:46:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.272 13:46:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.272 [2024-07-15 13:46:59.063560] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:04.272 13:46:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.272 13:46:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:04.272 13:46:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.272 13:46:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.530 Malloc1 00:08:04.530 13:46:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.530 13:46:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:04.530 13:46:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.530 13:46:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.530 13:46:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.530 13:46:59 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:04.530 13:46:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.530 13:46:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.530 13:46:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.531 13:46:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:04.531 13:46:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.531 13:46:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.531 [2024-07-15 13:46:59.247113] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:04.531 13:46:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.531 13:46:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:04.531 13:46:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:08:04.531 13:46:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:08:04.531 13:46:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:08:04.531 13:46:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:08:04.531 13:46:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:04.531 13:46:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.531 13:46:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.531 13:46:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.531 13:46:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:08:04.531 { 00:08:04.531 "name": "Malloc1", 00:08:04.531 "aliases": [ 00:08:04.531 "2f7f2286-3c82-427b-b46a-8dc887aadc62" 00:08:04.531 ], 00:08:04.531 "product_name": "Malloc disk", 00:08:04.531 "block_size": 512, 00:08:04.531 "num_blocks": 1048576, 00:08:04.531 "uuid": "2f7f2286-3c82-427b-b46a-8dc887aadc62", 00:08:04.531 "assigned_rate_limits": { 00:08:04.531 "rw_ios_per_sec": 0, 00:08:04.531 "rw_mbytes_per_sec": 0, 00:08:04.531 "r_mbytes_per_sec": 0, 00:08:04.531 "w_mbytes_per_sec": 0 00:08:04.531 }, 00:08:04.531 "claimed": true, 00:08:04.531 "claim_type": "exclusive_write", 00:08:04.531 "zoned": false, 00:08:04.531 "supported_io_types": { 00:08:04.531 "read": true, 00:08:04.531 "write": true, 00:08:04.531 "unmap": true, 00:08:04.531 "flush": true, 00:08:04.531 "reset": true, 00:08:04.531 "nvme_admin": false, 00:08:04.531 "nvme_io": false, 00:08:04.531 "nvme_io_md": false, 00:08:04.531 "write_zeroes": true, 00:08:04.531 "zcopy": true, 00:08:04.531 "get_zone_info": false, 00:08:04.531 "zone_management": false, 00:08:04.531 
"zone_append": false, 00:08:04.531 "compare": false, 00:08:04.531 "compare_and_write": false, 00:08:04.531 "abort": true, 00:08:04.531 "seek_hole": false, 00:08:04.531 "seek_data": false, 00:08:04.531 "copy": true, 00:08:04.531 "nvme_iov_md": false 00:08:04.531 }, 00:08:04.531 "memory_domains": [ 00:08:04.531 { 00:08:04.531 "dma_device_id": "system", 00:08:04.531 "dma_device_type": 1 00:08:04.531 }, 00:08:04.531 { 00:08:04.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.531 "dma_device_type": 2 00:08:04.531 } 00:08:04.531 ], 00:08:04.531 "driver_specific": {} 00:08:04.531 } 00:08:04.531 ]' 00:08:04.531 13:46:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:08:04.531 13:46:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:08:04.531 13:46:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:08:04.531 13:46:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:08:04.531 13:46:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:08:04.531 13:46:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:08:04.531 13:46:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:04.531 13:46:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:05.097 13:46:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:05.097 13:46:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:08:05.097 13:46:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:05.097 13:46:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:05.097 13:46:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:08:07.628 13:47:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:07.628 13:47:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:07.628 13:47:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:07.628 13:47:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:07.628 13:47:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:07.628 13:47:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:08:07.628 13:47:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:07.628 13:47:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:08:07.628 13:47:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:07.628 13:47:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:07.628 13:47:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:07.628 13:47:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:07.628 13:47:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:07.628 13:47:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:07.628 13:47:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:07.628 13:47:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:07.628 13:47:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:07.628 13:47:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:07.886 13:47:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:09.290 13:47:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:09.290 13:47:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:09.290 13:47:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:09.290 13:47:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.290 13:47:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.290 ************************************ 00:08:09.290 START TEST filesystem_in_capsule_ext4 00:08:09.290 ************************************ 00:08:09.290 13:47:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:09.290 13:47:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:09.290 13:47:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:09.290 13:47:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:09.290 13:47:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:08:09.290 13:47:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:09.290 13:47:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:08:09.290 13:47:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:08:09.290 13:47:03 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:08:09.290 13:47:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:08:09.290 13:47:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:09.290 mke2fs 1.46.5 (30-Dec-2021) 00:08:09.290 Discarding device blocks: 0/522240 done 00:08:09.290 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:09.290 Filesystem UUID: ad368e56-ae33-498a-beec-aa54e7e7c8e0 00:08:09.290 Superblock backups stored on blocks: 00:08:09.290 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:09.290 00:08:09.290 Allocating group tables: 0/64 done 00:08:09.290 Writing inode tables: 0/64 done 00:08:09.290 Creating journal (8192 blocks): done 00:08:09.290 Writing superblocks and filesystem accounting information: 0/64 done 00:08:09.290 00:08:09.290 13:47:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:08:09.290 13:47:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:09.549 13:47:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:09.549 13:47:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:08:09.549 13:47:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:09.549 13:47:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:08:09.549 13:47:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:09.549 13:47:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:09.549 13:47:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3654226 00:08:09.549 13:47:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:09.549 13:47:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:09.549 13:47:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:09.549 13:47:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:09.549 00:08:09.549 real 0m0.568s 00:08:09.549 user 0m0.020s 00:08:09.549 sys 0m0.057s 00:08:09.549 13:47:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:09.549 13:47:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:09.549 ************************************ 00:08:09.549 END TEST filesystem_in_capsule_ext4 00:08:09.549 ************************************ 00:08:09.549 
13:47:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:09.549 13:47:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:09.549 13:47:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:09.549 13:47:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.549 13:47:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.549 ************************************ 00:08:09.549 START TEST filesystem_in_capsule_btrfs 00:08:09.549 ************************************ 00:08:09.549 13:47:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:09.549 13:47:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:09.549 13:47:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:09.549 13:47:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:09.549 13:47:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:08:09.549 13:47:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:09.549 13:47:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:08:09.549 13:47:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:08:09.549 13:47:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:08:09.549 13:47:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:08:09.549 13:47:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:10.135 btrfs-progs v6.6.2 00:08:10.135 See https://btrfs.readthedocs.io for more information. 00:08:10.135 00:08:10.135 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:10.135 NOTE: several default settings have changed in version 5.15, please make sure 00:08:10.135 this does not affect your deployments: 00:08:10.135 - DUP for metadata (-m dup) 00:08:10.135 - enabled no-holes (-O no-holes) 00:08:10.135 - enabled free-space-tree (-R free-space-tree) 00:08:10.135 00:08:10.135 Label: (null) 00:08:10.135 UUID: 95cedd61-0707-4f42-a742-9e516a9ab2d8 00:08:10.135 Node size: 16384 00:08:10.135 Sector size: 4096 00:08:10.135 Filesystem size: 510.00MiB 00:08:10.135 Block group profiles: 00:08:10.135 Data: single 8.00MiB 00:08:10.135 Metadata: DUP 32.00MiB 00:08:10.135 System: DUP 8.00MiB 00:08:10.135 SSD detected: yes 00:08:10.135 Zoned device: no 00:08:10.135 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:10.135 Runtime features: free-space-tree 00:08:10.135 Checksum: crc32c 00:08:10.135 Number of devices: 1 00:08:10.135 Devices: 00:08:10.135 ID SIZE PATH 00:08:10.135 1 510.00MiB /dev/nvme0n1p1 00:08:10.135 00:08:10.135 13:47:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:08:10.135 13:47:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:11.069 13:47:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:11.069 13:47:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:08:11.069 13:47:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:11.069 13:47:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:08:11.069 13:47:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:11.069 13:47:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:11.329 13:47:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3654226 00:08:11.329 13:47:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:11.329 13:47:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:11.329 13:47:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:11.329 13:47:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:11.329 00:08:11.329 real 0m1.563s 00:08:11.329 user 0m0.016s 00:08:11.329 sys 0m0.121s 00:08:11.329 13:47:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:11.329 13:47:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:11.329 ************************************ 00:08:11.329 END TEST filesystem_in_capsule_btrfs 00:08:11.329 ************************************ 00:08:11.329 13:47:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:08:11.329 13:47:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:11.329 13:47:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:11.329 13:47:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:11.329 13:47:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:11.329 ************************************ 00:08:11.329 START TEST filesystem_in_capsule_xfs 00:08:11.329 ************************************ 00:08:11.329 13:47:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:08:11.329 13:47:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:11.329 13:47:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:11.329 13:47:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:11.329 13:47:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:08:11.329 13:47:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:11.329 13:47:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:08:11.329 13:47:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:08:11.329 13:47:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:08:11.329 13:47:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:08:11.329 13:47:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:11.329 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:11.329 = sectsz=512 attr=2, projid32bit=1 00:08:11.329 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:11.329 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:11.329 data = bsize=4096 blocks=130560, imaxpct=25 00:08:11.330 = sunit=0 swidth=0 blks 00:08:11.330 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:11.330 log =internal log bsize=4096 blocks=16384, version=2 00:08:11.330 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:11.330 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:12.264 Discarding blocks...Done. 
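Across the three filesystem passes, the make_filesystem helper traced here only varies the force flag it hands to mkfs: -F for ext4, -f for everything else. A sketch of that selection (just the logic visible in the trace, not the helper's full source):

case "$fstype" in
    ext4) force=-F ;;   # mkfs.ext4 forces with a capital -F
    *)    force=-f ;;   # mkfs.btrfs and mkfs.xfs take -f
esac
mkfs."$fstype" $force /dev/nvme0n1p1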
00:08:12.264 13:47:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:12.264 13:47:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:14.792 13:47:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:14.792 13:47:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:14.792 13:47:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:14.792 13:47:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:14.792 13:47:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:14.792 13:47:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:14.792 13:47:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3654226 00:08:14.792 13:47:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:14.792 13:47:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:14.792 13:47:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:14.792 13:47:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:14.792 00:08:14.792 real 0m3.383s 00:08:14.792 user 0m0.016s 00:08:14.792 sys 0m0.059s 00:08:14.792 13:47:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:14.792 13:47:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:14.792 ************************************ 00:08:14.792 END TEST filesystem_in_capsule_xfs 00:08:14.792 ************************************ 00:08:14.792 13:47:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:14.792 13:47:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:15.051 13:47:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:15.051 13:47:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:15.051 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:15.051 13:47:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:15.051 13:47:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:08:15.051 13:47:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:15.051 13:47:09 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:15.051 13:47:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:15.051 13:47:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:15.051 13:47:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:15.052 13:47:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:15.052 13:47:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.052 13:47:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:15.052 13:47:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.052 13:47:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:15.052 13:47:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3654226 00:08:15.052 13:47:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 3654226 ']' 00:08:15.052 13:47:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 3654226 00:08:15.052 13:47:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:15.052 13:47:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:15.052 13:47:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3654226 00:08:15.052 13:47:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:15.052 13:47:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:15.052 13:47:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3654226' 00:08:15.052 killing process with pid 3654226 00:08:15.052 13:47:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 3654226 00:08:15.052 13:47:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 3654226 00:08:15.618 13:47:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:15.618 00:08:15.618 real 0m11.605s 00:08:15.618 user 0m44.372s 00:08:15.618 sys 0m1.801s 00:08:15.618 13:47:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:15.618 13:47:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:15.618 ************************************ 00:08:15.618 END TEST nvmf_filesystem_in_capsule 00:08:15.618 ************************************ 00:08:15.618 13:47:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:15.618 13:47:10 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:15.618 13:47:10 nvmf_tcp.nvmf_filesystem -- 
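For reference, the teardown that closes the in-capsule run above reduces to a few standalone steps (a sketch; rpc_cmd in the trace is SPDK's RPC wrapper and is written here as scripts/rpc.py, while the device, NQN and PID are the ones from this log):

flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1    # drop the SPDK_TEST partition again
sync
nvme disconnect -n nqn.2016-06.io.spdk:cnode1      # detach the initiator
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill $nvmfpid                                      # stop nvmf_tgt (PID 3654226 here)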
nvmf/common.sh@488 -- # nvmfcleanup 00:08:15.618 13:47:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:15.618 13:47:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:15.618 13:47:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:08:15.618 13:47:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:15.618 13:47:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:15.618 rmmod nvme_tcp 00:08:15.618 rmmod nvme_fabrics 00:08:15.618 rmmod nvme_keyring 00:08:15.618 13:47:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:15.618 13:47:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:15.618 13:47:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:15.618 13:47:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:15.618 13:47:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:15.618 13:47:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:15.618 13:47:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:15.618 13:47:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:15.618 13:47:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:15.618 13:47:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.618 13:47:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:15.618 13:47:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:18.154 13:47:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:18.154 00:08:18.154 real 0m29.423s 00:08:18.154 user 1m35.849s 00:08:18.154 sys 0m5.417s 00:08:18.154 13:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:18.154 13:47:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:18.154 ************************************ 00:08:18.155 END TEST nvmf_filesystem 00:08:18.155 ************************************ 00:08:18.155 13:47:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:18.155 13:47:12 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:18.155 13:47:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:18.155 13:47:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:18.155 13:47:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:18.155 ************************************ 00:08:18.155 START TEST nvmf_target_discovery 00:08:18.155 ************************************ 00:08:18.155 13:47:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:18.155 * Looking for test storage... 
00:08:18.155 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:18.155 13:47:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:18.155 13:47:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:18.155 13:47:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:18.155 13:47:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:18.155 13:47:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:18.155 13:47:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:18.155 13:47:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:18.155 13:47:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:18.155 13:47:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:18.155 13:47:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:18.155 13:47:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:18.155 13:47:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:18.155 13:47:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:08:18.155 13:47:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:08:18.155 13:47:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:18.155 13:47:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:18.155 13:47:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:18.155 13:47:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:18.155 13:47:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:18.155 13:47:12 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:18.155 13:47:12 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:18.155 13:47:12 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:18.155 13:47:12 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.155 13:47:12 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.155 13:47:12 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.155 13:47:12 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:18.155 13:47:12 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.155 13:47:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:18.155 13:47:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:18.155 13:47:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:18.155 13:47:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:18.155 13:47:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:18.155 13:47:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:18.155 13:47:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:18.155 13:47:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:18.155 13:47:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:18.155 13:47:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:18.155 13:47:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:18.155 13:47:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:18.155 13:47:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:18.155 13:47:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:18.155 13:47:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:18.155 13:47:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:18.155 13:47:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:08:18.155 13:47:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:18.155 13:47:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:18.155 13:47:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.155 13:47:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:18.155 13:47:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:18.155 13:47:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:18.155 13:47:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:18.155 13:47:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:18.155 13:47:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:20.058 13:47:14 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:08:20.058 Found 0000:84:00.0 (0x8086 - 0x159b) 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:08:20.058 Found 0000:84:00.1 (0x8086 - 0x159b) 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:08:20.058 Found net devices under 0000:84:00.0: cvl_0_0 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:08:20.058 Found net devices under 0000:84:00.1: cvl_0_1 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:20.058 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:20.059 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:20.059 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:20.059 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:20.059 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:20.059 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:20.059 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:20.059 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:20.059 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:20.059 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:20.059 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:20.059 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:20.059 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:20.059 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:20.059 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:08:20.059 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:20.059 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:20.059 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:20.059 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:20.059 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:20.059 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:08:20.059 00:08:20.059 --- 10.0.0.2 ping statistics --- 00:08:20.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:20.059 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:08:20.059 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:20.059 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:20.059 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:08:20.059 00:08:20.059 --- 10.0.0.1 ping statistics --- 00:08:20.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:20.059 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:08:20.059 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:20.059 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:20.059 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:20.059 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:20.059 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:20.059 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:20.059 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:20.059 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:20.059 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:20.059 13:47:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:20.059 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:20.059 13:47:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:20.059 13:47:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:20.059 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=3657724 00:08:20.059 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:20.059 13:47:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 3657724 00:08:20.059 13:47:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 3657724 ']' 00:08:20.059 13:47:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.059 13:47:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:20.059 13:47:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:08:20.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.059 13:47:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:20.059 13:47:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:20.059 [2024-07-15 13:47:14.893675] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:08:20.059 [2024-07-15 13:47:14.893799] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:20.317 EAL: No free 2048 kB hugepages reported on node 1 00:08:20.317 [2024-07-15 13:47:14.958179] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:20.317 [2024-07-15 13:47:15.068145] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:20.317 [2024-07-15 13:47:15.068209] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:20.317 [2024-07-15 13:47:15.068222] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:20.317 [2024-07-15 13:47:15.068233] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:20.317 [2024-07-15 13:47:15.068242] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:20.317 [2024-07-15 13:47:15.068332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:20.317 [2024-07-15 13:47:15.068398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:20.317 [2024-07-15 13:47:15.068465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:20.317 [2024-07-15 13:47:15.068468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:20.575 [2024-07-15 13:47:15.223791] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
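The seq 1 4 loop that starts here applies one pattern per backing device: create a null bdev, wrap it in a subsystem, attach the bdev as a namespace, and expose the subsystem on the TCP listener. One iteration, written as the assumed standalone scripts/rpc.py equivalents of the rpc_cmd calls traced here and just below (NQNs, serials, sizes and the 10.0.0.2:4420 listener are copied from this log):

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # done once, before the loop
scripts/rpc.py bdev_null_create Null1 102400 512          # NULL_BDEV_SIZE / NULL_BLOCK_SIZE
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420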
00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:20.575 Null1 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:20.575 [2024-07-15 13:47:15.264089] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:20.575 Null2 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:20.575 13:47:15 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:20.575 Null3 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:20.575 Null4 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.575 13:47:15 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.575 13:47:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:20.576 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.576 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:20.576 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.576 13:47:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 4420 00:08:20.836 00:08:20.836 Discovery Log Number of Records 6, Generation counter 6 00:08:20.836 =====Discovery Log Entry 0====== 00:08:20.836 trtype: tcp 00:08:20.837 adrfam: ipv4 00:08:20.837 subtype: current discovery subsystem 00:08:20.837 treq: not required 00:08:20.837 portid: 0 00:08:20.837 trsvcid: 4420 00:08:20.837 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:20.837 traddr: 10.0.0.2 00:08:20.837 eflags: explicit discovery connections, duplicate discovery information 00:08:20.837 sectype: none 00:08:20.837 =====Discovery Log Entry 1====== 00:08:20.837 trtype: tcp 00:08:20.837 adrfam: ipv4 00:08:20.837 subtype: nvme subsystem 00:08:20.837 treq: not required 00:08:20.837 portid: 0 00:08:20.837 trsvcid: 4420 00:08:20.837 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:20.837 traddr: 10.0.0.2 00:08:20.837 eflags: none 00:08:20.837 sectype: none 00:08:20.837 =====Discovery Log Entry 2====== 00:08:20.837 trtype: tcp 00:08:20.837 adrfam: ipv4 00:08:20.837 subtype: nvme subsystem 00:08:20.837 treq: not required 00:08:20.837 portid: 0 00:08:20.837 trsvcid: 4420 00:08:20.837 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:20.837 traddr: 10.0.0.2 00:08:20.837 eflags: none 00:08:20.837 sectype: none 00:08:20.837 =====Discovery Log Entry 3====== 00:08:20.837 trtype: tcp 00:08:20.837 adrfam: ipv4 00:08:20.837 subtype: nvme subsystem 00:08:20.837 treq: not required 00:08:20.837 portid: 0 00:08:20.837 trsvcid: 4420 00:08:20.837 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:20.837 traddr: 10.0.0.2 00:08:20.837 eflags: none 00:08:20.837 sectype: none 00:08:20.837 =====Discovery Log Entry 4====== 00:08:20.837 trtype: tcp 00:08:20.837 adrfam: ipv4 00:08:20.837 subtype: nvme subsystem 00:08:20.837 treq: not required 
00:08:20.837 portid: 0 00:08:20.837 trsvcid: 4420 00:08:20.837 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:20.837 traddr: 10.0.0.2 00:08:20.837 eflags: none 00:08:20.837 sectype: none 00:08:20.837 =====Discovery Log Entry 5====== 00:08:20.837 trtype: tcp 00:08:20.837 adrfam: ipv4 00:08:20.837 subtype: discovery subsystem referral 00:08:20.837 treq: not required 00:08:20.837 portid: 0 00:08:20.837 trsvcid: 4430 00:08:20.837 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:20.837 traddr: 10.0.0.2 00:08:20.837 eflags: none 00:08:20.837 sectype: none 00:08:20.837 13:47:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:20.837 Perform nvmf subsystem discovery via RPC 00:08:20.837 13:47:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:20.837 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.837 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:20.837 [ 00:08:20.837 { 00:08:20.837 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:20.837 "subtype": "Discovery", 00:08:20.837 "listen_addresses": [ 00:08:20.837 { 00:08:20.837 "trtype": "TCP", 00:08:20.837 "adrfam": "IPv4", 00:08:20.837 "traddr": "10.0.0.2", 00:08:20.837 "trsvcid": "4420" 00:08:20.837 } 00:08:20.837 ], 00:08:20.837 "allow_any_host": true, 00:08:20.837 "hosts": [] 00:08:20.837 }, 00:08:20.837 { 00:08:20.837 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:20.837 "subtype": "NVMe", 00:08:20.837 "listen_addresses": [ 00:08:20.837 { 00:08:20.837 "trtype": "TCP", 00:08:20.837 "adrfam": "IPv4", 00:08:20.837 "traddr": "10.0.0.2", 00:08:20.837 "trsvcid": "4420" 00:08:20.837 } 00:08:20.837 ], 00:08:20.837 "allow_any_host": true, 00:08:20.837 "hosts": [], 00:08:20.837 "serial_number": "SPDK00000000000001", 00:08:20.837 "model_number": "SPDK bdev Controller", 00:08:20.837 "max_namespaces": 32, 00:08:20.837 "min_cntlid": 1, 00:08:20.837 "max_cntlid": 65519, 00:08:20.837 "namespaces": [ 00:08:20.837 { 00:08:20.837 "nsid": 1, 00:08:20.837 "bdev_name": "Null1", 00:08:20.837 "name": "Null1", 00:08:20.837 "nguid": "0D9432D2AF07497488A62574C4A8C537", 00:08:20.837 "uuid": "0d9432d2-af07-4974-88a6-2574c4a8c537" 00:08:20.837 } 00:08:20.837 ] 00:08:20.837 }, 00:08:20.837 { 00:08:20.837 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:20.837 "subtype": "NVMe", 00:08:20.837 "listen_addresses": [ 00:08:20.837 { 00:08:20.837 "trtype": "TCP", 00:08:20.837 "adrfam": "IPv4", 00:08:20.837 "traddr": "10.0.0.2", 00:08:20.837 "trsvcid": "4420" 00:08:20.837 } 00:08:20.837 ], 00:08:20.837 "allow_any_host": true, 00:08:20.837 "hosts": [], 00:08:20.837 "serial_number": "SPDK00000000000002", 00:08:20.837 "model_number": "SPDK bdev Controller", 00:08:20.837 "max_namespaces": 32, 00:08:20.837 "min_cntlid": 1, 00:08:20.837 "max_cntlid": 65519, 00:08:20.837 "namespaces": [ 00:08:20.837 { 00:08:20.837 "nsid": 1, 00:08:20.837 "bdev_name": "Null2", 00:08:20.837 "name": "Null2", 00:08:20.837 "nguid": "0ECFDE7345394B13808D15CB173998D7", 00:08:20.837 "uuid": "0ecfde73-4539-4b13-808d-15cb173998d7" 00:08:20.837 } 00:08:20.837 ] 00:08:20.837 }, 00:08:20.837 { 00:08:20.837 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:20.837 "subtype": "NVMe", 00:08:20.837 "listen_addresses": [ 00:08:20.837 { 00:08:20.837 "trtype": "TCP", 00:08:20.837 "adrfam": "IPv4", 00:08:20.837 "traddr": "10.0.0.2", 00:08:20.837 "trsvcid": "4420" 00:08:20.837 } 00:08:20.837 ], 00:08:20.837 "allow_any_host": true, 
00:08:20.837 "hosts": [], 00:08:20.837 "serial_number": "SPDK00000000000003", 00:08:20.837 "model_number": "SPDK bdev Controller", 00:08:20.837 "max_namespaces": 32, 00:08:20.837 "min_cntlid": 1, 00:08:20.837 "max_cntlid": 65519, 00:08:20.837 "namespaces": [ 00:08:20.837 { 00:08:20.837 "nsid": 1, 00:08:20.837 "bdev_name": "Null3", 00:08:20.837 "name": "Null3", 00:08:20.837 "nguid": "1D1E6DE3892A4208A66460E867653A9B", 00:08:20.837 "uuid": "1d1e6de3-892a-4208-a664-60e867653a9b" 00:08:20.837 } 00:08:20.837 ] 00:08:20.837 }, 00:08:20.837 { 00:08:20.837 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:20.837 "subtype": "NVMe", 00:08:20.837 "listen_addresses": [ 00:08:20.837 { 00:08:20.837 "trtype": "TCP", 00:08:20.837 "adrfam": "IPv4", 00:08:20.837 "traddr": "10.0.0.2", 00:08:20.837 "trsvcid": "4420" 00:08:20.837 } 00:08:20.837 ], 00:08:20.837 "allow_any_host": true, 00:08:20.837 "hosts": [], 00:08:20.837 "serial_number": "SPDK00000000000004", 00:08:20.837 "model_number": "SPDK bdev Controller", 00:08:20.837 "max_namespaces": 32, 00:08:20.837 "min_cntlid": 1, 00:08:20.837 "max_cntlid": 65519, 00:08:20.837 "namespaces": [ 00:08:20.837 { 00:08:20.837 "nsid": 1, 00:08:20.837 "bdev_name": "Null4", 00:08:20.837 "name": "Null4", 00:08:20.837 "nguid": "8481EA4272CA4FCEBC0C7C5F5D48E938", 00:08:20.837 "uuid": "8481ea42-72ca-4fce-bc0c-7c5f5d48e938" 00:08:20.837 } 00:08:20.837 ] 00:08:20.837 } 00:08:20.837 ] 00:08:20.837 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.837 13:47:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:20.837 13:47:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:20.837 13:47:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:20.837 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.837 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:20.837 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.837 13:47:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:20.837 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.837 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:20.837 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.837 13:47:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:20.837 13:47:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:20.837 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.837 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:20.837 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.837 13:47:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:20.837 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.837 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:20.837 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:08:20.837 13:47:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:20.837 13:47:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:20.838 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.838 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:20.838 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.838 13:47:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:20.838 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.838 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:20.838 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.838 13:47:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:20.838 13:47:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:20.838 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.838 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:20.838 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.838 13:47:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:20.838 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.838 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:20.838 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.838 13:47:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:20.838 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.838 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:21.098 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.098 13:47:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:21.098 13:47:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:21.098 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.098 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:21.098 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.098 13:47:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:21.098 13:47:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:21.098 13:47:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:21.098 13:47:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:21.098 13:47:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:21.098 13:47:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:21.098 13:47:15 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:21.098 13:47:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:21.098 13:47:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:21.098 13:47:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:21.098 rmmod nvme_tcp 00:08:21.098 rmmod nvme_fabrics 00:08:21.098 rmmod nvme_keyring 00:08:21.098 13:47:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:21.098 13:47:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:21.098 13:47:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:21.098 13:47:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 3657724 ']' 00:08:21.098 13:47:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 3657724 00:08:21.098 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 3657724 ']' 00:08:21.098 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 3657724 00:08:21.098 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:08:21.098 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:21.098 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3657724 00:08:21.098 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:21.098 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:21.098 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3657724' 00:08:21.098 killing process with pid 3657724 00:08:21.098 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 3657724 00:08:21.098 13:47:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 3657724 00:08:21.357 13:47:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:21.358 13:47:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:21.358 13:47:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:21.358 13:47:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:21.358 13:47:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:21.358 13:47:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.358 13:47:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:21.358 13:47:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:23.896 13:47:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:23.896 00:08:23.896 real 0m5.670s 00:08:23.896 user 0m4.837s 00:08:23.896 sys 0m1.907s 00:08:23.896 13:47:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:23.896 13:47:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:23.896 ************************************ 00:08:23.896 END TEST nvmf_target_discovery 00:08:23.896 ************************************ 00:08:23.896 13:47:18 nvmf_tcp -- common/autotest_common.sh@1142 
-- # return 0 00:08:23.896 13:47:18 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:23.896 13:47:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:23.896 13:47:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:23.896 13:47:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:23.896 ************************************ 00:08:23.896 START TEST nvmf_referrals 00:08:23.896 ************************************ 00:08:23.896 13:47:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:23.896 * Looking for test storage... 00:08:23.896 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:23.896 13:47:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:23.896 13:47:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:23.896 13:47:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:23.896 13:47:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:23.896 13:47:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:23.896 13:47:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:23.896 13:47:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:23.896 13:47:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:23.896 13:47:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:23.896 13:47:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:23.896 13:47:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:23.896 13:47:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:23.896 13:47:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:08:23.896 13:47:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:08:23.896 13:47:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:23.896 13:47:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:23.896 13:47:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:23.896 13:47:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:23.896 13:47:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:23.896 13:47:18 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:23.896 13:47:18 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:23.896 13:47:18 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:23.896 13:47:18 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.896 13:47:18 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.896 13:47:18 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.896 13:47:18 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:23.896 13:47:18 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.896 13:47:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:23.896 13:47:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:23.896 13:47:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:23.896 13:47:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:23.896 13:47:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:23.896 13:47:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:23.896 13:47:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:23.896 13:47:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:23.896 13:47:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:23.896 13:47:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:23.896 13:47:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:23.896 13:47:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
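The referral IPs and port just defined (127.0.0.2 through 127.0.0.4 on 4430) drive the whole test that follows. As a hand-written companion to the trace, here is a condensed sketch of that add / list / verify / remove cycle, using only the RPCs and the nvme discover invocation that show up below; the host NQN/ID flags are dropped for brevity, and the SPDK checkout path and RPC socket are assumed defaults, so treat it as illustrative rather than a copy of the job's exact commands.

#!/usr/bin/env bash
# Sketch only: assumes nvmf_tgt is running on /var/tmp/spdk.sock and
# SPDK_DIR points at an SPDK tree; IPs and ports match the trace.
set -e
rpc="${SPDK_DIR:?set SPDK_DIR to your spdk checkout}/scripts/rpc.py"

# Transport plus a discovery listener on 8009 (referrals.sh@40-41).
"$rpc" nvmf_create_transport -t tcp -o -u 8192
"$rpc" nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery

# Publish the three referrals on the referral port (referrals.sh@44-46).
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    "$rpc" nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done

# Target-side view of the referrals (referrals.sh@21).
"$rpc" nvmf_discovery_get_referrals | jq -r '.[].address.traddr'

# Initiator-side view via the discovery log (referrals.sh@26).
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
  | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'

# Remove them again (referrals.sh@52-54).
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    "$rpc" nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
done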
00:08:23.896 13:47:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:23.896 13:47:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:23.896 13:47:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:23.896 13:47:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:23.896 13:47:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:23.896 13:47:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:23.896 13:47:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:23.896 13:47:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:23.896 13:47:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:23.896 13:47:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:23.896 13:47:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:23.897 13:47:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:23.897 13:47:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:23.897 13:47:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:23.897 13:47:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:23.897 13:47:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:25.801 13:47:20 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:08:25.801 Found 0000:84:00.0 (0x8086 - 0x159b) 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:08:25.801 Found 0000:84:00.1 (0x8086 - 0x159b) 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:25.801 13:47:20 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:08:25.801 Found net devices under 0000:84:00.0: cvl_0_0 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:08:25.801 Found net devices under 0000:84:00.1: cvl_0_1 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:25.801 13:47:20 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:25.801 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:25.801 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:08:25.801 00:08:25.801 --- 10.0.0.2 ping statistics --- 00:08:25.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.801 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:25.801 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:25.801 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:08:25.801 00:08:25.801 --- 10.0.0.1 ping statistics --- 00:08:25.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.801 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=3659826 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 3659826 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 3659826 ']' 00:08:25.801 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:25.802 13:47:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.802 13:47:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:25.802 13:47:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:25.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.802 13:47:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:25.802 13:47:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:25.802 [2024-07-15 13:47:20.604825] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:08:25.802 [2024-07-15 13:47:20.604919] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:25.802 EAL: No free 2048 kB hugepages reported on node 1 00:08:26.060 [2024-07-15 13:47:20.671251] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:26.060 [2024-07-15 13:47:20.783240] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:26.060 [2024-07-15 13:47:20.783316] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:26.060 [2024-07-15 13:47:20.783329] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:26.060 [2024-07-15 13:47:20.783340] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:26.060 [2024-07-15 13:47:20.783349] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:26.060 [2024-07-15 13:47:20.783430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:26.060 [2024-07-15 13:47:20.784759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:26.060 [2024-07-15 13:47:20.784828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:26.060 [2024-07-15 13:47:20.784832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.318 13:47:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:26.318 13:47:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:08:26.318 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:26.318 13:47:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:26.318 13:47:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:26.318 13:47:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:26.318 13:47:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:26.318 13:47:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.318 13:47:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:26.318 [2024-07-15 13:47:20.932463] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:26.318 13:47:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.318 13:47:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:26.318 13:47:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.318 13:47:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:26.318 [2024-07-15 13:47:20.944641] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:08:26.318 13:47:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.318 13:47:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:26.318 13:47:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.318 13:47:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:26.318 13:47:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.318 13:47:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:26.318 13:47:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.318 13:47:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:26.318 13:47:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.318 13:47:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:26.318 13:47:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.318 13:47:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:26.318 13:47:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.318 13:47:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:26.318 13:47:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:26.318 13:47:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.318 13:47:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:26.319 13:47:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.319 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:26.319 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:26.319 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:26.319 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:26.319 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:26.319 13:47:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.319 13:47:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:26.319 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:26.319 13:47:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.319 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:26.319 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:26.319 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:26.319 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:26.319 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:26.319 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 
--hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:26.319 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:26.319 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:26.578 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:26.578 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:26.578 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:26.578 13:47:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.578 13:47:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:26.578 13:47:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.578 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:26.578 13:47:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.578 13:47:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:26.578 13:47:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.578 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:26.578 13:47:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.578 13:47:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:26.578 13:47:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.578 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:26.578 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:26.578 13:47:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.578 13:47:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:26.578 13:47:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.578 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:26.578 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:26.578 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:26.578 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:26.578 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:26.579 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:26.579 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:26.837 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:26.837 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:26.837 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:08:26.837 13:47:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.837 13:47:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:26.837 13:47:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.837 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:26.837 13:47:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.837 13:47:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:26.837 13:47:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.837 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:26.837 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:26.837 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:26.837 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:26.837 13:47:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.837 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:26.837 13:47:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:26.837 13:47:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.837 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:26.837 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:26.837 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:26.837 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:26.837 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:26.837 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:26.837 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:26.837 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:26.837 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:26.837 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:26.837 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:26.837 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:26.837 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:26.837 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:26.837 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:27.095 13:47:21 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:27.095 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:27.095 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:27.095 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:27.095 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:27.095 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:27.095 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:27.095 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:27.095 13:47:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.095 13:47:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:27.095 13:47:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.095 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:27.095 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:27.095 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:27.095 13:47:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.095 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:27.095 13:47:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:27.095 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:27.095 13:47:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.095 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:27.095 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:27.095 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:27.095 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:27.095 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:27.095 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:27.095 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:27.095 13:47:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:27.353 13:47:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:27.353 13:47:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:27.353 13:47:22 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:27.353 13:47:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:27.353 13:47:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:27.353 13:47:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:27.353 13:47:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:27.611 13:47:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:27.611 13:47:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:27.611 13:47:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:27.611 13:47:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:27.611 13:47:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:27.611 13:47:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:27.611 13:47:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:27.611 13:47:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:27.612 13:47:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.612 13:47:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:27.612 13:47:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.612 13:47:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:27.612 13:47:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:27.612 13:47:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.612 13:47:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:27.612 13:47:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.612 13:47:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:27.612 13:47:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:27.612 13:47:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:27.612 13:47:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:27.612 13:47:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:27.612 13:47:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:27.612 13:47:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:27.872 
13:47:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:27.872 13:47:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:27.872 13:47:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:27.872 13:47:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:27.872 13:47:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:27.872 13:47:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:27.872 13:47:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:27.872 13:47:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:27.872 13:47:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:27.872 13:47:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:27.872 rmmod nvme_tcp 00:08:27.872 rmmod nvme_fabrics 00:08:27.872 rmmod nvme_keyring 00:08:27.872 13:47:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:27.872 13:47:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:27.872 13:47:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:27.872 13:47:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 3659826 ']' 00:08:27.872 13:47:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 3659826 00:08:27.872 13:47:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 3659826 ']' 00:08:27.872 13:47:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 3659826 00:08:27.872 13:47:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:08:27.872 13:47:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:27.872 13:47:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3659826 00:08:27.872 13:47:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:27.872 13:47:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:27.872 13:47:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3659826' 00:08:27.872 killing process with pid 3659826 00:08:27.872 13:47:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 3659826 00:08:27.872 13:47:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 3659826 00:08:28.191 13:47:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:28.191 13:47:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:28.191 13:47:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:28.191 13:47:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:28.191 13:47:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:28.191 13:47:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:28.191 13:47:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:28.191 13:47:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.094 13:47:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:30.094 00:08:30.094 real 0m6.681s 00:08:30.094 user 0m9.449s 00:08:30.094 sys 0m2.196s 00:08:30.094 13:47:24 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:30.094 13:47:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:30.094 ************************************ 00:08:30.094 END TEST nvmf_referrals 00:08:30.094 ************************************ 00:08:30.094 13:47:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:30.094 13:47:24 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:30.094 13:47:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:30.094 13:47:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:30.094 13:47:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:30.094 ************************************ 00:08:30.094 START TEST nvmf_connect_disconnect 00:08:30.094 ************************************ 00:08:30.094 13:47:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:30.353 * Looking for test storage... 00:08:30.353 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:30.353 13:47:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:30.353 13:47:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:30.353 13:47:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:30.353 13:47:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:30.353 13:47:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:30.353 13:47:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:30.353 13:47:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:30.353 13:47:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:30.353 13:47:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:30.353 13:47:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:30.353 13:47:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:30.353 13:47:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:30.353 13:47:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:08:30.353 13:47:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:08:30.353 13:47:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:30.353 13:47:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:30.353 13:47:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:30.353 13:47:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:30.353 13:47:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:30.353 13:47:24 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:30.353 13:47:24 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:30.354 13:47:24 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:30.354 13:47:24 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.354 13:47:24 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.354 13:47:24 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.354 13:47:24 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:30.354 13:47:24 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.354 13:47:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:30.354 13:47:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:30.354 13:47:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:30.354 13:47:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:30.354 13:47:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:30.354 13:47:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:30.354 13:47:24 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:30.354 13:47:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:30.354 13:47:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:30.354 13:47:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:30.354 13:47:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:30.354 13:47:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:30.354 13:47:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:30.354 13:47:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:30.354 13:47:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:30.354 13:47:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:30.354 13:47:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:30.354 13:47:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.354 13:47:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:30.354 13:47:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.354 13:47:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:30.354 13:47:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:30.354 13:47:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:30.354 13:47:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:08:32.259 Found 0000:84:00.0 (0x8086 - 0x159b) 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:08:32.259 Found 0000:84:00.1 (0x8086 - 0x159b) 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:32.259 13:47:27 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:08:32.259 Found net devices under 0000:84:00.0: cvl_0_0 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:08:32.259 Found net devices under 0000:84:00.1: cvl_0_1 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:32.259 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:32.517 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:32.517 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:32.517 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:32.517 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:32.517 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:32.517 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:32.517 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:32.517 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:32.517 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:08:32.517 00:08:32.517 --- 10.0.0.2 ping statistics --- 00:08:32.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.517 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:08:32.517 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:32.517 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:32.517 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:08:32.517 00:08:32.517 --- 10.0.0.1 ping statistics --- 00:08:32.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.517 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:08:32.517 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:32.517 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:32.517 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:32.517 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:32.517 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:32.517 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:32.517 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:32.517 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:32.517 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:32.517 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:32.517 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:32.517 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:32.517 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:32.517 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=3662138 00:08:32.517 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:32.518 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 3662138 00:08:32.518 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 3662138 ']' 00:08:32.518 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.518 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:32.518 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.518 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:32.518 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:32.518 [2024-07-15 13:47:27.275425] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
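Note on the setup traced above: before each nvmf test starts its target, nvmf_tcp_init builds a two-ended test network out of the two discovered E810 ports, then launches nvmf_tgt inside the target-side namespace. A condensed sketch of just those steps, using the interface names (cvl_0_0/cvl_0_1), addresses and flags this particular run used (the harness additionally records the PID and waits for the RPC socket, which is omitted here):

  # Put one port in a private namespace for the target; keep the other in the root
  # namespace for the initiator, and give each side an address on 10.0.0.0/24.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port and verify connectivity in both directions.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # Start the target application inside the namespace (backgrounded here for the sketch;
  # the harness instead waits for it to listen on /var/tmp/spdk.sock).
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &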
00:08:32.518 [2024-07-15 13:47:27.275493] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:32.518 EAL: No free 2048 kB hugepages reported on node 1 00:08:32.518 [2024-07-15 13:47:27.336786] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:32.776 [2024-07-15 13:47:27.441277] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:32.776 [2024-07-15 13:47:27.441334] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:32.776 [2024-07-15 13:47:27.441347] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:32.776 [2024-07-15 13:47:27.441357] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:32.776 [2024-07-15 13:47:27.441366] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:32.776 [2024-07-15 13:47:27.441447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.776 [2024-07-15 13:47:27.441553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:32.776 [2024-07-15 13:47:27.441628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:32.776 [2024-07-15 13:47:27.441631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.776 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:32.776 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:08:32.776 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:32.776 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:32.776 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:32.776 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:32.776 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:32.776 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.776 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:32.776 [2024-07-15 13:47:27.597748] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:32.776 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.776 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:32.776 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.776 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:33.035 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.035 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:33.035 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:33.035 13:47:27 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.035 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:33.035 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.035 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:33.035 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.035 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:33.035 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.035 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:33.035 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.035 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:33.035 [2024-07-15 13:47:27.659073] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:33.035 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.035 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:08:33.035 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:08:33.035 13:47:27 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:35.613 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:38.921 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:41.453 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:43.991 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:46.529 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:46.529 13:47:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:08:46.529 13:47:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:08:46.529 13:47:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:46.529 13:47:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:08:46.529 13:47:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:46.529 13:47:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:08:46.529 13:47:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:46.529 13:47:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:46.529 rmmod nvme_tcp 00:08:46.529 rmmod nvme_fabrics 00:08:46.787 rmmod nvme_keyring 00:08:46.787 13:47:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:46.787 13:47:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:08:46.788 13:47:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:08:46.788 13:47:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 3662138 ']' 00:08:46.788 13:47:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 3662138 00:08:46.788 13:47:41 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@948 -- # '[' -z 3662138 ']' 00:08:46.788 13:47:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 3662138 00:08:46.788 13:47:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:08:46.788 13:47:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:46.788 13:47:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3662138 00:08:46.788 13:47:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:46.788 13:47:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:46.788 13:47:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3662138' 00:08:46.788 killing process with pid 3662138 00:08:46.788 13:47:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 3662138 00:08:46.788 13:47:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 3662138 00:08:47.047 13:47:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:47.047 13:47:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:47.047 13:47:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:47.047 13:47:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:47.047 13:47:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:47.047 13:47:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:47.047 13:47:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:47.047 13:47:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:48.956 13:47:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:48.956 00:08:48.956 real 0m18.813s 00:08:48.956 user 0m56.366s 00:08:48.956 sys 0m3.420s 00:08:48.956 13:47:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:48.956 13:47:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:48.956 ************************************ 00:08:48.956 END TEST nvmf_connect_disconnect 00:08:48.956 ************************************ 00:08:48.956 13:47:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:48.956 13:47:43 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:48.956 13:47:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:48.956 13:47:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:48.956 13:47:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:48.956 ************************************ 00:08:48.956 START TEST nvmf_multitarget 00:08:48.956 ************************************ 00:08:48.956 13:47:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:49.214 * Looking for test storage... 
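The connect/disconnect test that finishes above configures its target entirely over RPC; rpc_cmd in the trace is the autotest wrapper around scripts/rpc.py. Stripped of the wrapper, the sequence amounts to the following sketch (run from the SPDK repo root against the default /var/tmp/spdk.sock):

  # TCP transport, a 64 MB / 512-byte-block malloc bdev, and a subsystem that exports it.
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  scripts/rpc.py bdev_malloc_create 64 512        # prints the new bdev name, Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The five 'NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)' lines earlier correspond to num_iterations=5 host-side connect/disconnect cycles against that listener; the loop itself runs under 'set +x', so only its output appears in the trace.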
00:08:49.214 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:49.214 13:47:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:49.214 13:47:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:08:49.214 13:47:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:49.214 13:47:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:49.214 13:47:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:49.214 13:47:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:49.215 13:47:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:49.215 13:47:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:49.215 13:47:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:49.215 13:47:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:49.215 13:47:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:49.215 13:47:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:49.215 13:47:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:08:49.215 13:47:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:08:49.215 13:47:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:49.215 13:47:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:49.215 13:47:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:49.215 13:47:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:49.215 13:47:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:49.215 13:47:43 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:49.215 13:47:43 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:49.215 13:47:43 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:49.215 13:47:43 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.215 13:47:43 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.215 13:47:43 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.215 13:47:43 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:08:49.215 13:47:43 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.215 13:47:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:08:49.215 13:47:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:49.215 13:47:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:49.215 13:47:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:49.215 13:47:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:49.215 13:47:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:49.215 13:47:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:49.215 13:47:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:49.215 13:47:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:49.215 13:47:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:49.215 13:47:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:08:49.215 13:47:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:49.215 13:47:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:49.215 13:47:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:49.215 13:47:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:49.215 13:47:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:49.215 13:47:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
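The common.sh preamble being re-sourced here repeats the same NIC discovery seen in the previous test: gather_supported_nvmf_pci_devs matches the two E810 ports by PCI ID (0x8086:0x159b at 0000:84:00.0 and 0000:84:00.1) and resolves each port's kernel interface name through sysfs. A minimal standalone version of that lookup, using the addresses from this host:

  for pci in 0000:84:00.0 0000:84:00.1; do
    # Each entry under .../net/ is the netdev bound to that PCI function
    # (cvl_0_0 and cvl_0_1 on this machine).
    ls "/sys/bus/pci/devices/$pci/net/"
  done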
00:08:49.215 13:47:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:49.215 13:47:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.215 13:47:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:49.215 13:47:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:49.215 13:47:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:08:49.215 13:47:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:08:51.124 Found 0000:84:00.0 (0x8086 - 0x159b) 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:08:51.124 Found 0000:84:00.1 (0x8086 - 0x159b) 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:08:51.124 Found net devices under 0000:84:00.0: cvl_0_0 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:08:51.124 Found net devices under 0000:84:00.1: cvl_0_1 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:51.124 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:51.125 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:51.125 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:51.125 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:51.125 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:51.125 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:51.125 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:51.125 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:51.125 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:51.125 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:51.125 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:51.125 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:51.125 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:51.125 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:51.383 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:51.383 13:47:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:51.383 13:47:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:51.383 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:51.383 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:08:51.383 00:08:51.383 --- 10.0.0.2 ping statistics --- 00:08:51.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.383 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:08:51.383 13:47:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:51.383 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:51.383 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:08:51.383 00:08:51.383 --- 10.0.0.1 ping statistics --- 00:08:51.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.383 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:08:51.383 13:47:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:51.383 13:47:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:08:51.383 13:47:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:51.383 13:47:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:51.383 13:47:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:51.383 13:47:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:51.383 13:47:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:51.383 13:47:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:51.383 13:47:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:51.383 13:47:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:08:51.383 13:47:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:51.383 13:47:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:51.383 13:47:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:51.383 13:47:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=3665804 00:08:51.383 13:47:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:51.383 13:47:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 3665804 00:08:51.383 13:47:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 3665804 ']' 00:08:51.383 13:47:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.383 13:47:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:51.384 13:47:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.384 13:47:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:51.384 13:47:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:51.384 [2024-07-15 13:47:46.091046] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
00:08:51.384 [2024-07-15 13:47:46.091152] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:51.384 EAL: No free 2048 kB hugepages reported on node 1 00:08:51.384 [2024-07-15 13:47:46.155296] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:51.645 [2024-07-15 13:47:46.259795] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:51.645 [2024-07-15 13:47:46.259859] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:51.645 [2024-07-15 13:47:46.259880] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:51.645 [2024-07-15 13:47:46.259891] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:51.645 [2024-07-15 13:47:46.259901] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:51.645 [2024-07-15 13:47:46.259971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:51.645 [2024-07-15 13:47:46.260058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:51.645 [2024-07-15 13:47:46.260120] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:51.645 [2024-07-15 13:47:46.260123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.645 13:47:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:51.645 13:47:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:08:51.645 13:47:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:51.645 13:47:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:51.645 13:47:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:51.645 13:47:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:51.645 13:47:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:51.645 13:47:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:51.645 13:47:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:08:51.906 13:47:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:08:51.906 13:47:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:08:51.906 "nvmf_tgt_1" 00:08:51.906 13:47:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:08:51.906 "nvmf_tgt_2" 00:08:51.906 13:47:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:51.906 13:47:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:08:52.164 13:47:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:08:52.164 13:47:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:08:52.164 true 00:08:52.164 13:47:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:08:52.423 true 00:08:52.423 13:47:47 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:52.423 13:47:47 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:08:52.423 13:47:47 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:08:52.423 13:47:47 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:52.423 13:47:47 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:08:52.423 13:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:52.423 13:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:08:52.423 13:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:52.423 13:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:08:52.423 13:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:52.423 13:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:52.423 rmmod nvme_tcp 00:08:52.423 rmmod nvme_fabrics 00:08:52.423 rmmod nvme_keyring 00:08:52.423 13:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:52.423 13:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:08:52.423 13:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:08:52.423 13:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 3665804 ']' 00:08:52.423 13:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 3665804 00:08:52.423 13:47:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 3665804 ']' 00:08:52.423 13:47:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 3665804 00:08:52.423 13:47:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:08:52.423 13:47:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:52.423 13:47:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3665804 00:08:52.423 13:47:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:52.423 13:47:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:52.423 13:47:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3665804' 00:08:52.423 killing process with pid 3665804 00:08:52.423 13:47:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 3665804 00:08:52.424 13:47:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 3665804 00:08:52.991 13:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:52.991 13:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:52.991 13:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:52.991 13:47:47 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:52.991 13:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:52.991 13:47:47 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:52.991 13:47:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:52.991 13:47:47 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:54.892 13:47:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:54.892 00:08:54.892 real 0m5.773s 00:08:54.892 user 0m6.421s 00:08:54.892 sys 0m1.874s 00:08:54.893 13:47:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:54.893 13:47:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:54.893 ************************************ 00:08:54.893 END TEST nvmf_multitarget 00:08:54.893 ************************************ 00:08:54.893 13:47:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:54.893 13:47:49 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:54.893 13:47:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:54.893 13:47:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:54.893 13:47:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:54.893 ************************************ 00:08:54.893 START TEST nvmf_rpc 00:08:54.893 ************************************ 00:08:54.893 13:47:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:54.893 * Looking for test storage... 
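The nvmf_multitarget run that just finished is, stripped of the xtrace noise, a short JSON-RPC round trip against the running nvmf_tgt. A minimal shell sketch of it, using the helper script and flags exactly as traced above and assuming only the default target exists beforehand:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    $RPC nvmf_get_targets | jq length             # 1: only the default target
    $RPC nvmf_create_target -n nvmf_tgt_1 -s 32   # add two extra targets
    $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
    $RPC nvmf_get_targets | jq length             # 3: default + nvmf_tgt_1 + nvmf_tgt_2
    $RPC nvmf_delete_target -n nvmf_tgt_1
    $RPC nvmf_delete_target -n nvmf_tgt_2
    $RPC nvmf_get_targets | jq length             # back to 1 before nvmftestfini tears down

The nvmf_rpc test now starting follows the same pattern at a larger scale: bring up the target, drive it over JSON-RPC, and verify the reported state with jq.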
00:08:54.893 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:54.893 13:47:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:54.893 13:47:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:08:54.893 13:47:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:54.893 13:47:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:54.893 13:47:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:54.893 13:47:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:54.893 13:47:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:54.893 13:47:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:54.893 13:47:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:54.893 13:47:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:54.893 13:47:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:54.893 13:47:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:54.893 13:47:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:08:54.893 13:47:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:08:54.893 13:47:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:54.893 13:47:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:54.893 13:47:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:54.893 13:47:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:54.893 13:47:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:54.893 13:47:49 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:54.893 13:47:49 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:54.893 13:47:49 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:54.893 13:47:49 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.893 13:47:49 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.893 13:47:49 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.893 13:47:49 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:08:54.893 13:47:49 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.893 13:47:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:08:54.893 13:47:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:54.893 13:47:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:54.893 13:47:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:54.893 13:47:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:54.893 13:47:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:54.893 13:47:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:54.893 13:47:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:54.893 13:47:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:54.893 13:47:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:08:54.893 13:47:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:08:54.893 13:47:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:54.893 13:47:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:54.893 13:47:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:54.893 13:47:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:54.893 13:47:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:54.893 13:47:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:54.893 13:47:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:54.893 13:47:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:54.893 13:47:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:54.893 13:47:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:54.893 13:47:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:08:54.893 13:47:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
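The arrays being declared here are filled by the classification and scan that follow: supported NICs are grouped by PCI vendor/device ID, and each matched function is resolved to its kernel netdev through sysfs. For the e810 ports in this testbed the per-device lookup amounts to roughly the following (PCI address taken from this run):

    pci=0000:84:00.0
    pci_net_devs=( /sys/bus/pci/devices/$pci/net/* )    # netdev directory behind the PCI function
    pci_net_devs=( "${pci_net_devs[@]##*/}" )           # keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"   # e.g. cvl_0_0

Both ports of the card (0000:84:00.0 and 0000:84:00.1) resolve this way to cvl_0_0 and cvl_0_1, which nvmf_tcp_init later splits into a target-side and an initiator-side interface.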
00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:08:57.426 Found 0000:84:00.0 (0x8086 - 0x159b) 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:08:57.426 Found 0000:84:00.1 (0x8086 - 0x159b) 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:08:57.426 Found net devices under 0000:84:00.0: cvl_0_0 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:08:57.426 Found net devices under 0000:84:00.1: cvl_0_1 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:57.426 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:57.426 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:08:57.426 00:08:57.426 --- 10.0.0.2 ping statistics --- 00:08:57.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.426 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:57.426 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:57.426 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:08:57.426 00:08:57.426 --- 10.0.0.1 ping statistics --- 00:08:57.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.426 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=3667915 00:08:57.426 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 3667915 00:08:57.427 13:47:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 3667915 ']' 00:08:57.427 13:47:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.427 13:47:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:57.427 13:47:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.427 13:47:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:57.427 13:47:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:57.427 13:47:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.427 [2024-07-15 13:47:51.979247] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:08:57.427 [2024-07-15 13:47:51.979327] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:57.427 EAL: No free 2048 kB hugepages reported on node 1 00:08:57.427 [2024-07-15 13:47:52.047748] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:57.427 [2024-07-15 13:47:52.164238] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:57.427 [2024-07-15 13:47:52.164325] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:57.427 [2024-07-15 13:47:52.164339] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:57.427 [2024-07-15 13:47:52.164350] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:57.427 [2024-07-15 13:47:52.164359] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:57.427 [2024-07-15 13:47:52.164410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:57.427 [2024-07-15 13:47:52.164467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:57.427 [2024-07-15 13:47:52.164545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:57.427 [2024-07-15 13:47:52.164548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.724 13:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:57.724 13:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:57.724 13:47:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:57.724 13:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:57.724 13:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.724 13:47:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:57.724 13:47:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:08:57.724 13:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.724 13:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.724 13:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.724 13:47:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:08:57.724 "tick_rate": 2700000000, 00:08:57.724 "poll_groups": [ 00:08:57.724 { 00:08:57.724 "name": "nvmf_tgt_poll_group_000", 00:08:57.724 "admin_qpairs": 0, 00:08:57.724 "io_qpairs": 0, 00:08:57.724 "current_admin_qpairs": 0, 00:08:57.724 "current_io_qpairs": 0, 00:08:57.724 "pending_bdev_io": 0, 00:08:57.724 "completed_nvme_io": 0, 00:08:57.724 "transports": [] 00:08:57.724 }, 00:08:57.724 { 00:08:57.724 "name": "nvmf_tgt_poll_group_001", 00:08:57.724 "admin_qpairs": 0, 00:08:57.724 "io_qpairs": 0, 00:08:57.724 "current_admin_qpairs": 0, 00:08:57.724 "current_io_qpairs": 0, 00:08:57.724 "pending_bdev_io": 0, 00:08:57.724 "completed_nvme_io": 0, 00:08:57.724 "transports": [] 00:08:57.724 }, 00:08:57.724 { 00:08:57.724 "name": "nvmf_tgt_poll_group_002", 00:08:57.724 "admin_qpairs": 0, 00:08:57.724 "io_qpairs": 0, 00:08:57.724 "current_admin_qpairs": 0, 00:08:57.724 "current_io_qpairs": 0, 00:08:57.724 "pending_bdev_io": 0, 00:08:57.724 "completed_nvme_io": 0, 00:08:57.724 "transports": [] 00:08:57.724 }, 00:08:57.724 { 00:08:57.725 "name": "nvmf_tgt_poll_group_003", 00:08:57.725 "admin_qpairs": 0, 00:08:57.725 "io_qpairs": 0, 00:08:57.725 "current_admin_qpairs": 0, 00:08:57.725 "current_io_qpairs": 0, 00:08:57.725 "pending_bdev_io": 0, 00:08:57.725 "completed_nvme_io": 0, 00:08:57.725 "transports": [] 00:08:57.725 } 00:08:57.725 ] 00:08:57.725 }' 00:08:57.725 13:47:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:08:57.725 13:47:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:08:57.725 13:47:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:08:57.725 13:47:52 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:08:57.725 13:47:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:08:57.725 13:47:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:08:57.725 13:47:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:08:57.725 13:47:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:57.725 13:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.725 13:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.725 [2024-07-15 13:47:52.413001] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:57.725 13:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.725 13:47:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:08:57.725 13:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.725 13:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.725 13:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.725 13:47:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:08:57.725 "tick_rate": 2700000000, 00:08:57.725 "poll_groups": [ 00:08:57.725 { 00:08:57.725 "name": "nvmf_tgt_poll_group_000", 00:08:57.725 "admin_qpairs": 0, 00:08:57.725 "io_qpairs": 0, 00:08:57.725 "current_admin_qpairs": 0, 00:08:57.725 "current_io_qpairs": 0, 00:08:57.725 "pending_bdev_io": 0, 00:08:57.725 "completed_nvme_io": 0, 00:08:57.725 "transports": [ 00:08:57.725 { 00:08:57.725 "trtype": "TCP" 00:08:57.725 } 00:08:57.725 ] 00:08:57.725 }, 00:08:57.725 { 00:08:57.725 "name": "nvmf_tgt_poll_group_001", 00:08:57.725 "admin_qpairs": 0, 00:08:57.725 "io_qpairs": 0, 00:08:57.725 "current_admin_qpairs": 0, 00:08:57.725 "current_io_qpairs": 0, 00:08:57.725 "pending_bdev_io": 0, 00:08:57.725 "completed_nvme_io": 0, 00:08:57.725 "transports": [ 00:08:57.725 { 00:08:57.725 "trtype": "TCP" 00:08:57.725 } 00:08:57.725 ] 00:08:57.725 }, 00:08:57.725 { 00:08:57.725 "name": "nvmf_tgt_poll_group_002", 00:08:57.725 "admin_qpairs": 0, 00:08:57.725 "io_qpairs": 0, 00:08:57.725 "current_admin_qpairs": 0, 00:08:57.725 "current_io_qpairs": 0, 00:08:57.725 "pending_bdev_io": 0, 00:08:57.725 "completed_nvme_io": 0, 00:08:57.725 "transports": [ 00:08:57.725 { 00:08:57.725 "trtype": "TCP" 00:08:57.725 } 00:08:57.725 ] 00:08:57.725 }, 00:08:57.725 { 00:08:57.725 "name": "nvmf_tgt_poll_group_003", 00:08:57.725 "admin_qpairs": 0, 00:08:57.725 "io_qpairs": 0, 00:08:57.725 "current_admin_qpairs": 0, 00:08:57.725 "current_io_qpairs": 0, 00:08:57.725 "pending_bdev_io": 0, 00:08:57.725 "completed_nvme_io": 0, 00:08:57.725 "transports": [ 00:08:57.725 { 00:08:57.725 "trtype": "TCP" 00:08:57.725 } 00:08:57.725 ] 00:08:57.725 } 00:08:57.725 ] 00:08:57.725 }' 00:08:57.725 13:47:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:08:57.725 13:47:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:57.725 13:47:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:57.725 13:47:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:57.725 13:47:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:08:57.725 13:47:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:08:57.725 13:47:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
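The jcount and jsum helpers whose expansions are traced here are thin jq wrappers over the nvmf_get_stats output. Reconstructed from the xtrace alone (the in-tree definitions in target/rpc.sh may read the stats from a shell variable rather than stdin), they amount to:

    jcount() {                                     # count the nodes a jq filter matches
        local filter=$1
        jq "$filter" | wc -l
    }
    jsum() {                                       # numeric sum of the matched values
        local filter=$1
        jq "$filter" | awk '{s+=$1} END {print s}'
    }

    rpc_cmd nvmf_get_stats | jcount '.poll_groups[].name'        # 4 poll groups, one per core in -m 0xF
    rpc_cmd nvmf_get_stats | jsum '.poll_groups[].admin_qpairs'  # 0 before any host connects
    rpc_cmd nvmf_get_stats | jsum '.poll_groups[].io_qpairs'     # likewise 0

These checks confirm that creating the TCP transport attached it to all four poll groups without opening any queue pairs yet.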
00:08:57.725 13:47:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:57.725 13:47:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:57.725 13:47:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:08:57.725 13:47:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:08:57.725 13:47:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:08:57.725 13:47:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:08:57.725 13:47:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:08:57.725 13:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.725 13:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.725 Malloc1 00:08:57.725 13:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.725 13:47:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:57.725 13:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.725 13:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.725 13:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.725 13:47:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:57.725 13:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.725 13:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.725 13:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.725 13:47:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:08:57.725 13:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.725 13:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.725 13:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.725 13:47:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:57.725 13:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.725 13:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.725 [2024-07-15 13:47:52.552053] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:57.725 13:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.725 13:47:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:08:57.725 13:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:57.725 13:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:08:57.725 13:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:08:57.725 13:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:57.725 13:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:08:57.725 13:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:57.725 13:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:08:57.725 13:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:57.725 13:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:08:57.725 13:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:08:57.725 13:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:08:57.985 [2024-07-15 13:47:52.574549] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02' 00:08:57.985 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:57.985 could not add new controller: failed to write to nvme-fabrics device 00:08:57.985 13:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:08:57.985 13:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:57.985 13:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:57.985 13:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:57.985 13:47:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:08:57.985 13:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.985 13:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:57.985 13:47:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.985 13:47:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:58.553 13:47:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:08:58.553 13:47:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:58.553 13:47:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:58.553 13:47:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:58.553 13:47:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:00.454 13:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:00.454 13:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:00.454 13:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:00.454 13:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:00.454 13:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:00.454 13:47:55 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:00.454 13:47:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:00.713 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:00.713 13:47:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:00.713 13:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:00.714 13:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:00.714 13:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:00.714 13:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:00.714 13:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:00.714 13:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:00.714 13:47:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:00.714 13:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.714 13:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.714 13:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.714 13:47:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:00.714 13:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:00.714 13:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:00.714 13:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:09:00.714 13:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:00.714 13:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:09:00.714 13:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:00.714 13:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:09:00.714 13:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:00.714 13:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:09:00.714 13:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:09:00.714 13:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:00.714 [2024-07-15 13:47:55.353919] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02' 00:09:00.714 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:00.714 could not add new controller: failed to write to nvme-fabrics device 00:09:00.714 13:47:55 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:09:00.714 13:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:00.714 13:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:00.714 13:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:00.714 13:47:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:09:00.714 13:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.714 13:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.714 13:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.714 13:47:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:01.280 13:47:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:09:01.280 13:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:01.280 13:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:01.280 13:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:01.280 13:47:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:03.185 13:47:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:03.185 13:47:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:03.185 13:47:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:03.185 13:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:03.185 13:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:03.185 13:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:03.185 13:47:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:03.444 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.444 13:47:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:03.444 13:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:03.444 13:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:03.444 13:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:03.444 13:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:03.444 13:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:03.444 13:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:03.444 13:47:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:03.444 13:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.444 13:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:03.444 13:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.444 13:47:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:09:03.444 13:47:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:03.444 13:47:58 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:03.444 13:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.444 13:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:03.444 13:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.444 13:47:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:03.444 13:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.444 13:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:03.444 [2024-07-15 13:47:58.143301] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:03.444 13:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.444 13:47:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:03.445 13:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.445 13:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:03.445 13:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.445 13:47:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:03.445 13:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.445 13:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:03.445 13:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.445 13:47:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:04.011 13:47:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:04.011 13:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:04.011 13:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:04.011 13:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:04.011 13:47:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:06.576 13:48:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:06.576 13:48:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:06.576 13:48:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:06.576 13:48:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:06.576 13:48:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:06.576 13:48:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:06.576 13:48:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:06.576 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:06.576 13:48:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:06.577 13:48:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:06.577 13:48:00 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:06.577 13:48:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:06.577 13:48:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:06.577 13:48:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:06.577 13:48:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:06.577 13:48:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:06.577 13:48:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.577 13:48:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:06.577 13:48:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.577 13:48:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:06.577 13:48:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.577 13:48:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:06.577 13:48:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.577 13:48:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:06.577 13:48:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:06.577 13:48:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.577 13:48:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:06.577 13:48:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.577 13:48:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:06.577 13:48:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.577 13:48:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:06.577 [2024-07-15 13:48:00.955128] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:06.577 13:48:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.577 13:48:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:06.577 13:48:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.577 13:48:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:06.577 13:48:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.577 13:48:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:06.577 13:48:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.577 13:48:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:06.577 13:48:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.577 13:48:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:06.842 13:48:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:06.842 13:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:09:06.842 13:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:06.842 13:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:06.842 13:48:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:08.754 13:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:08.754 13:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:08.754 13:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:08.754 13:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:08.754 13:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:08.754 13:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:08.754 13:48:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:09.011 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:09.011 13:48:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:09.011 13:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:09.011 13:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:09.011 13:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:09.011 13:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:09.011 13:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:09.011 13:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:09.011 13:48:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:09.011 13:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.011 13:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.011 13:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.011 13:48:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:09.011 13:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.011 13:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.011 13:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.011 13:48:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:09.011 13:48:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:09.011 13:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.011 13:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.011 13:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.011 13:48:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:09.011 13:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.011 13:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.011 [2024-07-15 13:48:03.677030] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:09.011 13:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.011 13:48:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:09.011 13:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.011 13:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.012 13:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.012 13:48:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:09.012 13:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.012 13:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.012 13:48:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.012 13:48:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:09.641 13:48:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:09.641 13:48:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:09.641 13:48:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:09.641 13:48:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:09.641 13:48:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:11.543 13:48:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:11.543 13:48:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:11.543 13:48:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:11.803 13:48:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:11.803 13:48:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:11.803 13:48:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:11.803 13:48:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:11.803 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.803 13:48:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:11.803 13:48:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:11.803 13:48:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:11.803 13:48:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:11.803 13:48:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:11.803 13:48:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:11.803 13:48:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:11.803 13:48:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:11.803 13:48:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.803 13:48:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.803 13:48:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:09:11.803 13:48:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:11.803 13:48:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.803 13:48:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.803 13:48:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.803 13:48:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:11.803 13:48:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:11.803 13:48:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.803 13:48:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.803 13:48:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.803 13:48:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:11.803 13:48:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.803 13:48:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.803 [2024-07-15 13:48:06.505847] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:11.803 13:48:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.803 13:48:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:11.803 13:48:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.803 13:48:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.803 13:48:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.803 13:48:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:11.803 13:48:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.803 13:48:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.803 13:48:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.803 13:48:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:12.370 13:48:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:12.370 13:48:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:12.370 13:48:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:12.370 13:48:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:12.370 13:48:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:14.904 13:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:14.904 13:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:14.904 13:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:14.904 13:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:14.904 13:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:14.904 
13:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:14.904 13:48:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:14.904 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.904 13:48:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:14.904 13:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:14.904 13:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:14.904 13:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:14.904 13:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:14.904 13:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:14.904 13:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:14.904 13:48:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:14.904 13:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.904 13:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.904 13:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.904 13:48:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:14.904 13:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.904 13:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.904 13:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.904 13:48:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:14.904 13:48:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:14.904 13:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.904 13:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.904 13:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.904 13:48:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:14.904 13:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.904 13:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.904 [2024-07-15 13:48:09.241994] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:14.904 13:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.904 13:48:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:14.904 13:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.904 13:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.904 13:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.904 13:48:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:14.904 13:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.904 13:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.904 13:48:09 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.904 13:48:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:15.164 13:48:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:15.164 13:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:15.164 13:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:15.164 13:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:15.164 13:48:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:17.698 13:48:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:17.698 13:48:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:17.698 13:48:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:17.698 13:48:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:17.698 13:48:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:17.698 13:48:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:17.698 13:48:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:17.698 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.698 [2024-07-15 13:48:12.056103] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.698 [2024-07-15 13:48:12.104186] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.698 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.699 [2024-07-15 13:48:12.152336] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
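(The same shape repeats throughout the trace above: target/rpc.sh first loops through cycles that also connect the kernel initiator and wait for the namespace serial via waitforserial, then through RPC-only cycles that skip the host side. A condensed, standalone sketch of the fuller cycle follows; it assumes the SPDK rpc.py client is on PATH (rpc_cmd in the trace is a wrapper around it), nvme-cli is installed, and a Malloc1 bdev already exists on the target.)

  # One create/connect/teardown pass, condensed from the trace (sketch only)
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
  rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  # wait for a block device with the expected serial, as waitforserial does above
  until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do sleep 1; done
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1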
00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.699 [2024-07-15 13:48:12.200498] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
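(A few entries below, the script dumps nvmf_get_stats and the jsum helper reduces the per-poll-group counters with jq piped into awk; the checks (( 7 > 0 )) and (( 336 > 0 )) are simply those sums, 2+2+1+2 admin qpairs and 4 x 84 io qpairs in this run. A standalone sketch of that aggregation, assuming rpc.py, jq and awk are available:)

  # Sum one numeric field across all poll groups, as jsum does
  stats=$(rpc.py nvmf_get_stats)
  echo "$stats" | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1} END {print s}'   # 7 in this run
  echo "$stats" | jq '.poll_groups[].io_qpairs'    | awk '{s+=$1} END {print s}'   # 336 in this run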
00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.699 [2024-07-15 13:48:12.248684] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:09:17.699 "tick_rate": 2700000000, 00:09:17.699 "poll_groups": [ 00:09:17.699 { 00:09:17.699 "name": "nvmf_tgt_poll_group_000", 00:09:17.699 "admin_qpairs": 2, 00:09:17.699 "io_qpairs": 84, 00:09:17.699 "current_admin_qpairs": 0, 00:09:17.699 "current_io_qpairs": 0, 00:09:17.699 "pending_bdev_io": 0, 00:09:17.699 "completed_nvme_io": 229, 00:09:17.699 "transports": [ 00:09:17.699 { 00:09:17.699 "trtype": "TCP" 00:09:17.699 } 00:09:17.699 ] 00:09:17.699 }, 00:09:17.699 { 00:09:17.699 "name": "nvmf_tgt_poll_group_001", 00:09:17.699 "admin_qpairs": 2, 00:09:17.699 "io_qpairs": 84, 00:09:17.699 "current_admin_qpairs": 0, 00:09:17.699 "current_io_qpairs": 0, 00:09:17.699 "pending_bdev_io": 0, 00:09:17.699 "completed_nvme_io": 184, 00:09:17.699 "transports": [ 00:09:17.699 { 00:09:17.699 "trtype": "TCP" 00:09:17.699 } 00:09:17.699 ] 00:09:17.699 }, 00:09:17.699 { 00:09:17.699 
"name": "nvmf_tgt_poll_group_002", 00:09:17.699 "admin_qpairs": 1, 00:09:17.699 "io_qpairs": 84, 00:09:17.699 "current_admin_qpairs": 0, 00:09:17.699 "current_io_qpairs": 0, 00:09:17.699 "pending_bdev_io": 0, 00:09:17.699 "completed_nvme_io": 179, 00:09:17.699 "transports": [ 00:09:17.699 { 00:09:17.699 "trtype": "TCP" 00:09:17.699 } 00:09:17.699 ] 00:09:17.699 }, 00:09:17.699 { 00:09:17.699 "name": "nvmf_tgt_poll_group_003", 00:09:17.699 "admin_qpairs": 2, 00:09:17.699 "io_qpairs": 84, 00:09:17.699 "current_admin_qpairs": 0, 00:09:17.699 "current_io_qpairs": 0, 00:09:17.699 "pending_bdev_io": 0, 00:09:17.699 "completed_nvme_io": 94, 00:09:17.699 "transports": [ 00:09:17.699 { 00:09:17.699 "trtype": "TCP" 00:09:17.699 } 00:09:17.699 ] 00:09:17.699 } 00:09:17.699 ] 00:09:17.699 }' 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:17.699 rmmod nvme_tcp 00:09:17.699 rmmod nvme_fabrics 00:09:17.699 rmmod nvme_keyring 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 3667915 ']' 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 3667915 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 3667915 ']' 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 3667915 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3667915 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3667915' 00:09:17.699 killing process with pid 3667915 00:09:17.699 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 3667915 00:09:17.700 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 3667915 00:09:17.958 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:17.958 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:17.958 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:17.958 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:17.958 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:17.958 13:48:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:17.958 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:17.958 13:48:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:20.487 13:48:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:20.487 00:09:20.487 real 0m25.188s 00:09:20.487 user 1m21.648s 00:09:20.487 sys 0m3.997s 00:09:20.487 13:48:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:20.487 13:48:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.487 ************************************ 00:09:20.487 END TEST nvmf_rpc 00:09:20.487 ************************************ 00:09:20.487 13:48:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:20.487 13:48:14 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:20.487 13:48:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:20.487 13:48:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:20.487 13:48:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:20.487 ************************************ 00:09:20.487 START TEST nvmf_invalid 00:09:20.487 ************************************ 00:09:20.487 13:48:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:20.487 * Looking for test storage... 
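(Before invalid.sh gets going here, the nvmf_rpc run above was wound down by nvmftestfini: the nvmf_tgt process with pid 3667915 was killed, the kernel NVMe/TCP initiator modules were unloaded, and the test address on cvl_0_1 was flushed. A rough equivalent of that sequence, with the pid and interface name treated as run-specific placeholders:)

  # Rough equivalent of the nvmftestfini teardown traced above (sketch only)
  kill "$nvmfpid" && wait "$nvmfpid"      # stop the nvmf_tgt reactor process
  modprobe -v -r nvme-tcp                 # unloads nvme_tcp plus now-unused dependents
  modprobe -v -r nvme-fabrics
  ip -4 addr flush cvl_0_1                # clear the initiator-side test address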
00:09:20.487 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:20.487 13:48:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:20.487 13:48:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:09:20.487 13:48:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:20.487 13:48:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:20.487 13:48:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:20.487 13:48:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:20.487 13:48:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:20.487 13:48:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:20.487 13:48:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:20.487 13:48:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:20.487 13:48:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:20.487 13:48:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:20.487 13:48:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:20.487 13:48:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:09:20.487 13:48:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:20.487 13:48:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:20.487 13:48:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:20.487 13:48:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:20.487 13:48:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:20.487 13:48:14 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:20.487 13:48:14 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:20.487 13:48:14 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:20.487 13:48:14 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.487 13:48:14 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.487 13:48:14 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.487 13:48:14 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:09:20.487 13:48:14 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.487 13:48:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:09:20.487 13:48:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:20.487 13:48:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:20.487 13:48:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:20.487 13:48:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:20.487 13:48:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:20.487 13:48:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:20.487 13:48:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:20.487 13:48:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:20.487 13:48:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:20.487 13:48:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:20.487 13:48:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:09:20.487 13:48:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:09:20.488 13:48:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:09:20.488 13:48:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:09:20.488 13:48:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:20.488 13:48:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:20.488 13:48:14 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:09:20.488 13:48:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:20.488 13:48:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:20.488 13:48:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:20.488 13:48:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:20.488 13:48:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:20.488 13:48:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:20.488 13:48:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:20.488 13:48:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:09:20.488 13:48:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:22.392 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:22.392 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:09:22.392 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:22.392 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:09:22.393 Found 0000:84:00.0 (0x8086 - 0x159b) 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:09:22.393 Found 0000:84:00.1 (0x8086 - 0x159b) 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:09:22.393 Found net devices under 0000:84:00.0: cvl_0_0 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:09:22.393 Found net devices under 0000:84:00.1: cvl_0_1 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:22.393 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:22.393 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:09:22.393 00:09:22.393 --- 10.0.0.2 ping statistics --- 00:09:22.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:22.393 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:22.393 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:22.393 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:09:22.393 00:09:22.393 --- 10.0.0.1 ping statistics --- 00:09:22.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:22.393 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:22.393 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:22.652 13:48:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:09:22.652 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:22.652 13:48:17 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:22.652 13:48:17 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:22.652 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=3673167 00:09:22.652 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:22.652 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 3673167 00:09:22.652 13:48:17 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 3673167 ']' 00:09:22.652 13:48:17 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.652 13:48:17 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:22.652 13:48:17 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:22.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:22.652 13:48:17 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:22.652 13:48:17 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:22.652 [2024-07-15 13:48:17.310898] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
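(The connectivity bring-up traced a little further up is plain iproute2 plumbing: the first E810 port, cvl_0_0, is moved into a private namespace and addressed as 10.0.0.2, its peer cvl_0_1 stays in the root namespace as 10.0.0.1, an iptables exception is opened for port 4420, and both directions are pinged before nvmf_tgt is started inside the namespace; the target's own startup log continues just below. A condensed sketch of that plumbing, with the cvl_0_* names specific to this machine:)

  # Condensed sketch of the namespace/addressing setup traced above (interface names are run-specific)
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # root ns -> namespaced target port
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # and back again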
00:09:22.652 [2024-07-15 13:48:17.310988] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:22.652 EAL: No free 2048 kB hugepages reported on node 1 00:09:22.652 [2024-07-15 13:48:17.375090] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:22.652 [2024-07-15 13:48:17.477680] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:22.652 [2024-07-15 13:48:17.477751] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:22.652 [2024-07-15 13:48:17.477766] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:22.652 [2024-07-15 13:48:17.477776] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:22.652 [2024-07-15 13:48:17.477785] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:22.652 [2024-07-15 13:48:17.477877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:22.652 [2024-07-15 13:48:17.477941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:22.652 [2024-07-15 13:48:17.478008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:22.652 [2024-07-15 13:48:17.478011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.910 13:48:17 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:22.910 13:48:17 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:09:22.910 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:22.910 13:48:17 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:22.910 13:48:17 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:22.910 13:48:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:22.910 13:48:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:22.910 13:48:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode13618 00:09:23.168 [2024-07-15 13:48:17.862199] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:09:23.168 13:48:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:09:23.168 { 00:09:23.168 "nqn": "nqn.2016-06.io.spdk:cnode13618", 00:09:23.168 "tgt_name": "foobar", 00:09:23.168 "method": "nvmf_create_subsystem", 00:09:23.168 "req_id": 1 00:09:23.168 } 00:09:23.168 Got JSON-RPC error response 00:09:23.168 response: 00:09:23.168 { 00:09:23.168 "code": -32603, 00:09:23.168 "message": "Unable to find target foobar" 00:09:23.168 }' 00:09:23.168 13:48:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:09:23.168 { 00:09:23.168 "nqn": "nqn.2016-06.io.spdk:cnode13618", 00:09:23.168 "tgt_name": "foobar", 00:09:23.168 "method": "nvmf_create_subsystem", 00:09:23.168 "req_id": 1 00:09:23.168 } 00:09:23.168 Got JSON-RPC error response 00:09:23.168 response: 00:09:23.168 { 00:09:23.168 "code": -32603, 00:09:23.168 "message": "Unable to find target foobar" 
00:09:23.168 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:09:23.168 13:48:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:09:23.168 13:48:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode19432 00:09:23.425 [2024-07-15 13:48:18.159210] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19432: invalid serial number 'SPDKISFASTANDAWESOME' 00:09:23.425 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:09:23.425 { 00:09:23.425 "nqn": "nqn.2016-06.io.spdk:cnode19432", 00:09:23.425 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:23.425 "method": "nvmf_create_subsystem", 00:09:23.425 "req_id": 1 00:09:23.425 } 00:09:23.425 Got JSON-RPC error response 00:09:23.425 response: 00:09:23.425 { 00:09:23.425 "code": -32602, 00:09:23.425 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:23.425 }' 00:09:23.425 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:09:23.425 { 00:09:23.425 "nqn": "nqn.2016-06.io.spdk:cnode19432", 00:09:23.425 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:23.425 "method": "nvmf_create_subsystem", 00:09:23.425 "req_id": 1 00:09:23.425 } 00:09:23.425 Got JSON-RPC error response 00:09:23.425 response: 00:09:23.425 { 00:09:23.425 "code": -32602, 00:09:23.426 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:23.426 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:23.426 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:09:23.426 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode27375 00:09:23.683 [2024-07-15 13:48:18.420057] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27375: invalid model number 'SPDK_Controller' 00:09:23.683 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:09:23.683 { 00:09:23.683 "nqn": "nqn.2016-06.io.spdk:cnode27375", 00:09:23.683 "model_number": "SPDK_Controller\u001f", 00:09:23.683 "method": "nvmf_create_subsystem", 00:09:23.683 "req_id": 1 00:09:23.683 } 00:09:23.683 Got JSON-RPC error response 00:09:23.683 response: 00:09:23.683 { 00:09:23.683 "code": -32602, 00:09:23.683 "message": "Invalid MN SPDK_Controller\u001f" 00:09:23.683 }' 00:09:23.683 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:09:23.683 { 00:09:23.683 "nqn": "nqn.2016-06.io.spdk:cnode27375", 00:09:23.683 "model_number": "SPDK_Controller\u001f", 00:09:23.683 "method": "nvmf_create_subsystem", 00:09:23.683 "req_id": 1 00:09:23.683 } 00:09:23.683 Got JSON-RPC error response 00:09:23.683 response: 00:09:23.683 { 00:09:23.683 "code": -32602, 00:09:23.683 "message": "Invalid MN SPDK_Controller\u001f" 00:09:23.683 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:23.683 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:09:23.683 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:09:23.683 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' 
'83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:23.683 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:23.683 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:23.683 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:23.683 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:23.683 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:09:23.683 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:09:23.683 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:09:23.683 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:23.683 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:23.683 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:09:23.683 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:09:23.683 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:09:23.683 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:23.683 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:23.683 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:09:23.683 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:09:23.683 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:09:23.683 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:23.683 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:23.683 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:09:23.683 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:09:23.683 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:09:23.683 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:23.683 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:23.683 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:09:23.683 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:09:23.683 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:09:23.683 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:23.683 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:23.683 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:09:23.683 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:09:23.683 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:09:23.683 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:23.683 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:23.683 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:09:23.683 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:09:23.683 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:09:23.683 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:23.683 
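The xtrace above and continuing below is target/invalid.sh's gen_random_s helper assembling a random 21-character serial number one code point at a time from the chars array of ASCII codes 32 through 127. A condensed sketch of that loop follows; the way the index is chosen is an assumption, since this excerpt only shows the per-character printf %x, echo -e and string+= steps:

gen_random_s() {
    local length=$1 ll string=
    local chars=($(seq 32 127))                     # same code points as the chars=('32' ... '127') array traced above
    for ((ll = 0; ll < length; ll++)); do
        local code=${chars[RANDOM % ${#chars[@]}]}  # assumption: a RANDOM-based pick; not visible in this excerpt
        string+=$(echo -e "\x$(printf %x "$code")") # printf %x then echo -e, as traced, appending one character
    done
    # the trace also guards against a leading '-' ([[ c == \- ]]) before echoing; that handling is not shown here
    echo "$string"
}

Feeding the resulting 21- and 41-character strings to nvmf_create_subsystem as -s and -d is what produces the "Invalid SN" and "Invalid MN" JSON-RPC errors matched in the traces below.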
13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:23.683 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:23.684 
13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ c == \- ]] 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'cKq_#X@/10it]2`<22?c~' 00:09:23.684 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'cKq_#X@/10it]2`<22?c~' nqn.2016-06.io.spdk:cnode29133 00:09:23.941 [2024-07-15 13:48:18.745173] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29133: invalid serial number 'cKq_#X@/10it]2`<22?c~' 00:09:23.941 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:09:23.941 { 00:09:23.941 "nqn": "nqn.2016-06.io.spdk:cnode29133", 00:09:23.941 "serial_number": "cKq_#X@/10it]2`<22?c~", 00:09:23.941 "method": "nvmf_create_subsystem", 00:09:23.941 "req_id": 1 00:09:23.941 } 00:09:23.941 Got JSON-RPC error response 00:09:23.941 response: 00:09:23.941 { 
00:09:23.941 "code": -32602, 00:09:23.941 "message": "Invalid SN cKq_#X@/10it]2`<22?c~" 00:09:23.941 }' 00:09:23.941 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:09:23.941 { 00:09:23.941 "nqn": "nqn.2016-06.io.spdk:cnode29133", 00:09:23.941 "serial_number": "cKq_#X@/10it]2`<22?c~", 00:09:23.941 "method": "nvmf_create_subsystem", 00:09:23.941 "req_id": 1 00:09:23.941 } 00:09:23.941 Got JSON-RPC error response 00:09:23.941 response: 00:09:23.941 { 00:09:23.941 "code": -32602, 00:09:23.941 "message": "Invalid SN cKq_#X@/10it]2`<22?c~" 00:09:23.941 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:23.941 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:09:23.941 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:09:23.941 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:23.941 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:23.941 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:23.941 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:23.941 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:23.941 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:09:23.941 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:09:23.941 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:09:23.941 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:23.941 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:23.941 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:09:23.941 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:09:23.941 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:09:23.941 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:23.941 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:23.941 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:09:23.941 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:09:23.941 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:09:23.941 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:23.941 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:23.941 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:09:23.941 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:09:23.941 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:09:23.941 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:23.941 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 
00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 
00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:09:24.199 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 
00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ u == \- ]] 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'u#Yk?5$GCVv\D#tHNN54sD1~#vbmBFxK8)[8{5.X"' 00:09:24.200 13:48:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'u#Yk?5$GCVv\D#tHNN54sD1~#vbmBFxK8)[8{5.X"' nqn.2016-06.io.spdk:cnode29871 00:09:24.457 [2024-07-15 13:48:19.166496] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29871: invalid model number 'u#Yk?5$GCVv\D#tHNN54sD1~#vbmBFxK8)[8{5.X"' 00:09:24.457 13:48:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:09:24.457 { 00:09:24.457 "nqn": "nqn.2016-06.io.spdk:cnode29871", 00:09:24.457 "model_number": "u#Yk?5$GCVv\\D#tHNN54sD1~#vbmBFxK8)[8{5.X\"", 00:09:24.457 "method": "nvmf_create_subsystem", 00:09:24.457 "req_id": 1 00:09:24.457 } 00:09:24.457 Got JSON-RPC error response 00:09:24.457 response: 00:09:24.457 { 00:09:24.457 "code": -32602, 00:09:24.457 "message": "Invalid MN u#Yk?5$GCVv\\D#tHNN54sD1~#vbmBFxK8)[8{5.X\"" 00:09:24.457 }' 00:09:24.457 13:48:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:09:24.457 { 00:09:24.457 "nqn": "nqn.2016-06.io.spdk:cnode29871", 00:09:24.457 "model_number": "u#Yk?5$GCVv\\D#tHNN54sD1~#vbmBFxK8)[8{5.X\"", 00:09:24.457 "method": "nvmf_create_subsystem", 00:09:24.457 "req_id": 1 00:09:24.457 } 00:09:24.457 Got JSON-RPC error response 00:09:24.457 response: 00:09:24.457 { 00:09:24.457 "code": -32602, 00:09:24.457 "message": "Invalid MN u#Yk?5$GCVv\\D#tHNN54sD1~#vbmBFxK8)[8{5.X\"" 00:09:24.457 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:24.457 13:48:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:09:24.714 [2024-07-15 13:48:19.451542] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:24.715 13:48:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:09:24.972 13:48:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:09:24.972 13:48:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:09:24.972 13:48:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:09:24.972 13:48:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:09:24.972 13:48:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:09:25.229 [2024-07-15 13:48:19.945171] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:09:25.229 13:48:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:09:25.229 { 00:09:25.229 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:25.229 "listen_address": { 00:09:25.229 "trtype": "tcp", 00:09:25.229 "traddr": "", 00:09:25.229 "trsvcid": "4421" 00:09:25.229 }, 00:09:25.229 "method": "nvmf_subsystem_remove_listener", 00:09:25.229 "req_id": 1 00:09:25.229 } 00:09:25.229 Got JSON-RPC error response 00:09:25.229 response: 00:09:25.229 { 00:09:25.229 "code": -32602, 00:09:25.229 "message": "Invalid parameters" 00:09:25.229 }' 00:09:25.229 13:48:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:09:25.229 { 00:09:25.229 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:25.229 "listen_address": { 00:09:25.229 "trtype": "tcp", 00:09:25.229 "traddr": "", 00:09:25.229 "trsvcid": "4421" 00:09:25.229 }, 00:09:25.229 "method": "nvmf_subsystem_remove_listener", 00:09:25.229 "req_id": 1 00:09:25.229 } 00:09:25.230 Got JSON-RPC error response 00:09:25.230 response: 00:09:25.230 { 00:09:25.230 "code": -32602, 00:09:25.230 "message": "Invalid parameters" 00:09:25.230 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:09:25.230 13:48:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30015 -i 0 00:09:25.488 [2024-07-15 13:48:20.193997] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30015: invalid cntlid range [0-65519] 00:09:25.488 13:48:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:09:25.488 { 00:09:25.488 "nqn": "nqn.2016-06.io.spdk:cnode30015", 00:09:25.488 "min_cntlid": 0, 00:09:25.488 "method": "nvmf_create_subsystem", 00:09:25.488 "req_id": 1 00:09:25.488 } 00:09:25.488 Got JSON-RPC error response 00:09:25.488 response: 00:09:25.488 { 00:09:25.488 "code": -32602, 00:09:25.488 "message": "Invalid cntlid range [0-65519]" 00:09:25.488 }' 00:09:25.488 13:48:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:09:25.488 { 00:09:25.488 "nqn": "nqn.2016-06.io.spdk:cnode30015", 00:09:25.488 "min_cntlid": 0, 00:09:25.488 "method": "nvmf_create_subsystem", 00:09:25.488 "req_id": 1 00:09:25.488 } 00:09:25.488 Got JSON-RPC error response 00:09:25.488 response: 00:09:25.488 { 00:09:25.488 "code": -32602, 00:09:25.488 "message": "Invalid cntlid range [0-65519]" 00:09:25.488 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ 
\r\a\n\g\e* ]] 00:09:25.488 13:48:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23717 -i 65520 00:09:25.745 [2024-07-15 13:48:20.438783] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23717: invalid cntlid range [65520-65519] 00:09:25.745 13:48:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:09:25.745 { 00:09:25.745 "nqn": "nqn.2016-06.io.spdk:cnode23717", 00:09:25.745 "min_cntlid": 65520, 00:09:25.745 "method": "nvmf_create_subsystem", 00:09:25.745 "req_id": 1 00:09:25.745 } 00:09:25.745 Got JSON-RPC error response 00:09:25.745 response: 00:09:25.745 { 00:09:25.745 "code": -32602, 00:09:25.745 "message": "Invalid cntlid range [65520-65519]" 00:09:25.745 }' 00:09:25.745 13:48:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:09:25.745 { 00:09:25.745 "nqn": "nqn.2016-06.io.spdk:cnode23717", 00:09:25.745 "min_cntlid": 65520, 00:09:25.745 "method": "nvmf_create_subsystem", 00:09:25.745 "req_id": 1 00:09:25.745 } 00:09:25.745 Got JSON-RPC error response 00:09:25.745 response: 00:09:25.745 { 00:09:25.745 "code": -32602, 00:09:25.745 "message": "Invalid cntlid range [65520-65519]" 00:09:25.745 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:25.745 13:48:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18783 -I 0 00:09:26.002 [2024-07-15 13:48:20.695632] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18783: invalid cntlid range [1-0] 00:09:26.002 13:48:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:09:26.002 { 00:09:26.002 "nqn": "nqn.2016-06.io.spdk:cnode18783", 00:09:26.002 "max_cntlid": 0, 00:09:26.002 "method": "nvmf_create_subsystem", 00:09:26.002 "req_id": 1 00:09:26.002 } 00:09:26.002 Got JSON-RPC error response 00:09:26.002 response: 00:09:26.002 { 00:09:26.002 "code": -32602, 00:09:26.002 "message": "Invalid cntlid range [1-0]" 00:09:26.002 }' 00:09:26.002 13:48:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:09:26.002 { 00:09:26.002 "nqn": "nqn.2016-06.io.spdk:cnode18783", 00:09:26.002 "max_cntlid": 0, 00:09:26.002 "method": "nvmf_create_subsystem", 00:09:26.002 "req_id": 1 00:09:26.002 } 00:09:26.002 Got JSON-RPC error response 00:09:26.002 response: 00:09:26.002 { 00:09:26.002 "code": -32602, 00:09:26.002 "message": "Invalid cntlid range [1-0]" 00:09:26.002 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:26.002 13:48:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17822 -I 65520 00:09:26.259 [2024-07-15 13:48:20.936398] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17822: invalid cntlid range [1-65520] 00:09:26.259 13:48:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:09:26.259 { 00:09:26.259 "nqn": "nqn.2016-06.io.spdk:cnode17822", 00:09:26.259 "max_cntlid": 65520, 00:09:26.259 "method": "nvmf_create_subsystem", 00:09:26.259 "req_id": 1 00:09:26.259 } 00:09:26.259 Got JSON-RPC error response 00:09:26.259 response: 00:09:26.259 { 00:09:26.259 "code": -32602, 00:09:26.259 "message": "Invalid cntlid range [1-65520]" 00:09:26.259 }' 00:09:26.259 13:48:20 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@80 -- # [[ request: 00:09:26.259 { 00:09:26.259 "nqn": "nqn.2016-06.io.spdk:cnode17822", 00:09:26.259 "max_cntlid": 65520, 00:09:26.259 "method": "nvmf_create_subsystem", 00:09:26.259 "req_id": 1 00:09:26.259 } 00:09:26.259 Got JSON-RPC error response 00:09:26.259 response: 00:09:26.259 { 00:09:26.259 "code": -32602, 00:09:26.259 "message": "Invalid cntlid range [1-65520]" 00:09:26.259 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:26.259 13:48:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10072 -i 6 -I 5 00:09:26.516 [2024-07-15 13:48:21.189262] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10072: invalid cntlid range [6-5] 00:09:26.516 13:48:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:09:26.516 { 00:09:26.516 "nqn": "nqn.2016-06.io.spdk:cnode10072", 00:09:26.516 "min_cntlid": 6, 00:09:26.516 "max_cntlid": 5, 00:09:26.516 "method": "nvmf_create_subsystem", 00:09:26.516 "req_id": 1 00:09:26.516 } 00:09:26.516 Got JSON-RPC error response 00:09:26.516 response: 00:09:26.516 { 00:09:26.516 "code": -32602, 00:09:26.516 "message": "Invalid cntlid range [6-5]" 00:09:26.516 }' 00:09:26.516 13:48:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:09:26.516 { 00:09:26.516 "nqn": "nqn.2016-06.io.spdk:cnode10072", 00:09:26.516 "min_cntlid": 6, 00:09:26.516 "max_cntlid": 5, 00:09:26.516 "method": "nvmf_create_subsystem", 00:09:26.516 "req_id": 1 00:09:26.516 } 00:09:26.516 Got JSON-RPC error response 00:09:26.516 response: 00:09:26.516 { 00:09:26.516 "code": -32602, 00:09:26.516 "message": "Invalid cntlid range [6-5]" 00:09:26.516 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:26.516 13:48:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:09:26.516 13:48:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:09:26.516 { 00:09:26.516 "name": "foobar", 00:09:26.516 "method": "nvmf_delete_target", 00:09:26.516 "req_id": 1 00:09:26.516 } 00:09:26.516 Got JSON-RPC error response 00:09:26.516 response: 00:09:26.516 { 00:09:26.516 "code": -32602, 00:09:26.516 "message": "The specified target doesn'\''t exist, cannot delete it." 00:09:26.516 }' 00:09:26.516 13:48:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:09:26.516 { 00:09:26.516 "name": "foobar", 00:09:26.516 "method": "nvmf_delete_target", 00:09:26.517 "req_id": 1 00:09:26.517 } 00:09:26.517 Got JSON-RPC error response 00:09:26.517 response: 00:09:26.517 { 00:09:26.517 "code": -32602, 00:09:26.517 "message": "The specified target doesn't exist, cannot delete it." 
00:09:26.517 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:09:26.517 13:48:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:09:26.517 13:48:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:09:26.517 13:48:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:26.517 13:48:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:09:26.517 13:48:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:26.517 13:48:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:09:26.517 13:48:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:26.517 13:48:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:26.517 rmmod nvme_tcp 00:09:26.517 rmmod nvme_fabrics 00:09:26.775 rmmod nvme_keyring 00:09:26.775 13:48:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:26.775 13:48:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:09:26.775 13:48:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:09:26.775 13:48:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 3673167 ']' 00:09:26.775 13:48:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 3673167 00:09:26.775 13:48:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 3673167 ']' 00:09:26.775 13:48:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 3673167 00:09:26.775 13:48:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:09:26.775 13:48:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:26.775 13:48:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3673167 00:09:26.775 13:48:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:26.775 13:48:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:26.775 13:48:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3673167' 00:09:26.775 killing process with pid 3673167 00:09:26.775 13:48:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 3673167 00:09:26.775 13:48:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 3673167 00:09:27.033 13:48:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:27.033 13:48:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:27.033 13:48:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:27.033 13:48:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:27.033 13:48:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:27.033 13:48:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.033 13:48:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:27.033 13:48:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:28.932 13:48:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:28.932 00:09:28.932 real 0m8.883s 00:09:28.932 user 0m20.493s 00:09:28.932 sys 0m2.508s 00:09:28.932 13:48:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:28.933 13:48:23 
nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:28.933 ************************************ 00:09:28.933 END TEST nvmf_invalid 00:09:28.933 ************************************ 00:09:28.933 13:48:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:28.933 13:48:23 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:28.933 13:48:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:28.933 13:48:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:28.933 13:48:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:29.190 ************************************ 00:09:29.190 START TEST nvmf_abort 00:09:29.190 ************************************ 00:09:29.190 13:48:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:29.190 * Looking for test storage... 00:09:29.190 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:29.190 13:48:23 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:29.190 13:48:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:09:29.190 13:48:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:29.190 13:48:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:29.190 13:48:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:29.190 13:48:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:29.190 13:48:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:29.190 13:48:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:29.190 13:48:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:29.190 13:48:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:29.190 13:48:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:29.190 13:48:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:29.190 13:48:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:29.190 13:48:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:09:29.190 13:48:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:29.190 13:48:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:29.190 13:48:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:29.190 13:48:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:29.190 13:48:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:29.190 13:48:23 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:29.190 13:48:23 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:29.190 13:48:23 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:29.190 13:48:23 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.190 13:48:23 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.190 13:48:23 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.190 13:48:23 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:09:29.190 13:48:23 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.190 13:48:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:09:29.190 13:48:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:29.190 13:48:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:29.190 13:48:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:29.190 13:48:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:29.190 13:48:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:29.190 13:48:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:29.190 13:48:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:29.190 13:48:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:29.190 13:48:23 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:29.190 13:48:23 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:29.190 13:48:23 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:09:29.190 13:48:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:29.190 13:48:23 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:29.190 13:48:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:29.190 13:48:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:29.190 13:48:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:29.190 13:48:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:29.190 13:48:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:29.190 13:48:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.190 13:48:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:29.190 13:48:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:29.190 13:48:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:09:29.190 13:48:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:31.086 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:31.086 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:09:31.086 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:31.086 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:31.086 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:31.086 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:31.086 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:31.086 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:09:31.086 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:31.086 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:09:31.086 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:09:31.086 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:09:31.086 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:09:31.086 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:09:31.086 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:09:31.086 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:31.086 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:31.086 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:31.086 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:31.086 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:31.086 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:31.086 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:31.086 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:31.086 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:31.086 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:31.086 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:31.086 
13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:31.086 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:31.086 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:31.086 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:31.086 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:31.086 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:31.086 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:31.086 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:09:31.086 Found 0000:84:00.0 (0x8086 - 0x159b) 00:09:31.086 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:31.086 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:31.086 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:31.086 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:31.086 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:31.086 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:31.086 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:09:31.086 Found 0000:84:00.1 (0x8086 - 0x159b) 00:09:31.086 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:31.086 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:31.086 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:31.086 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:31.086 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:31.086 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:31.086 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:31.086 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:31.086 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:31.086 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:31.086 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:31.086 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:31.086 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:31.086 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:31.086 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:31.087 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:09:31.087 Found net devices under 0000:84:00.0: cvl_0_0 00:09:31.087 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:31.087 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:31.087 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:31.087 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:31.087 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:09:31.087 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:31.087 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:31.087 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:31.087 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:09:31.087 Found net devices under 0000:84:00.1: cvl_0_1 00:09:31.087 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:31.087 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:31.087 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:09:31.087 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:31.087 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:31.087 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:31.087 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:31.087 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:31.087 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:31.087 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:31.087 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:31.087 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:31.087 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:31.087 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:31.087 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:31.087 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:31.087 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:31.087 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:31.087 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:31.345 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:31.345 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:31.345 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:31.345 13:48:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:31.345 13:48:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:31.345 13:48:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:31.345 13:48:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:31.345 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:31.345 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:09:31.345 00:09:31.345 --- 10.0.0.2 ping statistics --- 00:09:31.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.345 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:09:31.345 13:48:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:31.345 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:31.345 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:09:31.345 00:09:31.345 --- 10.0.0.1 ping statistics --- 00:09:31.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.345 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:09:31.345 13:48:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:31.345 13:48:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:09:31.345 13:48:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:31.345 13:48:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:31.345 13:48:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:31.345 13:48:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:31.345 13:48:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:31.345 13:48:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:31.345 13:48:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:31.345 13:48:26 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:09:31.345 13:48:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:31.345 13:48:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:31.345 13:48:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:31.345 13:48:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=3675706 00:09:31.345 13:48:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:31.345 13:48:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 3675706 00:09:31.345 13:48:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 3675706 ']' 00:09:31.345 13:48:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.345 13:48:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:31.345 13:48:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.345 13:48:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:31.345 13:48:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:31.345 [2024-07-15 13:48:26.098930] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
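Condensed from the nvmf_tcp_init portion of the trace above, the network scaffolding these TCP tests run on amounts to roughly the following (the cvl_0_0/cvl_0_1 interface names, the cvl_0_0_ns_spdk namespace and the 10.0.0.0/24 addresses are all taken from the trace; this is a sketch of the traced commands, not the full nvmf/common.sh logic):

  # the target port moves into a private namespace; the other E810 port stays
  # in the root namespace and acts as the initiator side
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP (port 4420) in
  ping -c 1 10.0.0.2                                                 # sanity checks in both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp                                                  # kernel initiator driver

With that in place, nvmf_tgt is launched inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xE, i.e. cores 1-3), and the SPDK/DPDK startup notices around this point are that process coming up.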
00:09:31.345 [2024-07-15 13:48:26.099011] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:31.345 EAL: No free 2048 kB hugepages reported on node 1 00:09:31.345 [2024-07-15 13:48:26.166587] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:31.603 [2024-07-15 13:48:26.274642] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:31.603 [2024-07-15 13:48:26.274706] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:31.603 [2024-07-15 13:48:26.274734] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:31.603 [2024-07-15 13:48:26.274752] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:31.603 [2024-07-15 13:48:26.274763] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:31.603 [2024-07-15 13:48:26.274906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:31.603 [2024-07-15 13:48:26.274967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:31.603 [2024-07-15 13:48:26.274970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:31.603 13:48:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:31.603 13:48:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:09:31.603 13:48:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:31.603 13:48:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:31.603 13:48:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:31.603 13:48:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:31.603 13:48:26 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:09:31.603 13:48:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.603 13:48:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:31.603 [2024-07-15 13:48:26.423605] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:31.603 13:48:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.603 13:48:26 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:09:31.603 13:48:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.603 13:48:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:31.861 Malloc0 00:09:31.861 13:48:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.861 13:48:26 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:31.861 13:48:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.861 13:48:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:31.861 Delay0 00:09:31.861 13:48:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.861 13:48:26 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
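At this point abort.sh has stood up its target over /var/tmp/spdk.sock; the rpc_cmd calls traced above, together with the namespace and listener additions in the next few entries, boil down to the sequence below (parameters copied from the trace; rpc_cmd is the test helper that forwards to scripts/rpc.py, so plain rpc.py invocations are shown here as an approximation):

  rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256        # TCP transport, options as traced
  rpc.py bdev_malloc_create 64 4096 -b Malloc0                 # 64 MiB RAM bdev, 4 KiB blocks
  rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000              # ~1 s artificial latencies keep I/O in flight long enough to abort
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0    # allow any host, serial SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0         # added in the next entries
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The build/examples/abort client that runs next connects to that listener from the initiator side and exercises abort handling while I/O is outstanding; its per-queue results follow in the trace.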
00:09:31.861 13:48:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.861 13:48:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:31.861 13:48:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.861 13:48:26 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:09:31.861 13:48:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.861 13:48:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:31.861 13:48:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.861 13:48:26 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:31.861 13:48:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.861 13:48:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:31.861 [2024-07-15 13:48:26.493826] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:31.861 13:48:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.861 13:48:26 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:31.861 13:48:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.861 13:48:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:31.861 13:48:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.861 13:48:26 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:09:31.861 EAL: No free 2048 kB hugepages reported on node 1 00:09:31.861 [2024-07-15 13:48:26.598663] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:34.409 Initializing NVMe Controllers 00:09:34.409 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:34.409 controller IO queue size 128 less than required 00:09:34.409 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:34.409 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:34.409 Initialization complete. Launching workers. 
00:09:34.409 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 33015 00:09:34.409 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33076, failed to submit 62 00:09:34.409 success 33019, unsuccess 57, failed 0 00:09:34.409 13:48:28 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:34.409 13:48:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:34.409 13:48:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:34.409 13:48:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:34.409 13:48:28 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:34.409 13:48:28 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:09:34.409 13:48:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:34.409 13:48:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:09:34.409 13:48:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:34.409 13:48:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:09:34.409 13:48:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:34.409 13:48:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:34.409 rmmod nvme_tcp 00:09:34.409 rmmod nvme_fabrics 00:09:34.409 rmmod nvme_keyring 00:09:34.409 13:48:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:34.409 13:48:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:09:34.409 13:48:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:09:34.409 13:48:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 3675706 ']' 00:09:34.409 13:48:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 3675706 00:09:34.409 13:48:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 3675706 ']' 00:09:34.409 13:48:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 3675706 00:09:34.409 13:48:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:09:34.409 13:48:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:34.409 13:48:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3675706 00:09:34.409 13:48:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:34.409 13:48:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:34.409 13:48:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3675706' 00:09:34.409 killing process with pid 3675706 00:09:34.409 13:48:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 3675706 00:09:34.409 13:48:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 3675706 00:09:34.409 13:48:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:34.409 13:48:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:34.409 13:48:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:34.409 13:48:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:34.409 13:48:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:34.409 13:48:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.409 13:48:29 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:34.409 13:48:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.949 13:48:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:36.949 00:09:36.949 real 0m7.388s 00:09:36.949 user 0m10.786s 00:09:36.949 sys 0m2.612s 00:09:36.949 13:48:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:36.949 13:48:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:36.949 ************************************ 00:09:36.949 END TEST nvmf_abort 00:09:36.949 ************************************ 00:09:36.949 13:48:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:36.949 13:48:31 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:36.949 13:48:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:36.949 13:48:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:36.949 13:48:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:36.949 ************************************ 00:09:36.949 START TEST nvmf_ns_hotplug_stress 00:09:36.949 ************************************ 00:09:36.949 13:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:36.949 * Looking for test storage... 00:09:36.949 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:36.949 13:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:36.949 13:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:09:36.949 13:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:36.949 13:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:36.949 13:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:36.949 13:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:36.949 13:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:36.949 13:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:36.949 13:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:36.949 13:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:36.949 13:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:36.949 13:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:36.949 13:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:36.949 13:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:09:36.949 13:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:36.949 13:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:36.949 13:48:31 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:36.949 13:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:36.949 13:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:36.949 13:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:36.949 13:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:36.949 13:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:36.949 13:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.949 13:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.949 13:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.949 13:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:09:36.949 13:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.949 13:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:09:36.949 13:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:36.949 13:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:36.949 13:48:31 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:36.949 13:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:36.949 13:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:36.949 13:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:36.949 13:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:36.949 13:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:36.949 13:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:36.949 13:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:09:36.949 13:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:36.949 13:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:36.949 13:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:36.949 13:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:36.949 13:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:36.949 13:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.949 13:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:36.949 13:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.949 13:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:36.949 13:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:36.949 13:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:09:36.949 13:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:09:38.844 Found 0000:84:00.0 (0x8086 - 0x159b) 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:09:38.844 Found 0000:84:00.1 (0x8086 - 0x159b) 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:38.844 13:48:33 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:09:38.844 Found net devices under 0000:84:00.0: cvl_0_0 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:09:38.844 Found net devices under 0000:84:00.1: cvl_0_1 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:38.844 13:48:33 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:38.844 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:38.844 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:09:38.844 00:09:38.844 --- 10.0.0.2 ping statistics --- 00:09:38.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.844 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:38.844 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:38.844 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:09:38.844 00:09:38.844 --- 10.0.0.1 ping statistics --- 00:09:38.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.844 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=3678067 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:38.844 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 3678067 00:09:38.845 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 3678067 ']' 00:09:38.845 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.845 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:38.845 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.845 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:38.845 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:39.102 [2024-07-15 13:48:33.694489] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
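The second half of the log is the same bring-up repeated for ns_hotplug_stress.sh, and the entries that follow build its target: subsystem nqn.2016-06.io.spdk:cnode1 capped with -m 10, a Malloc0/Delay0 chain plus a null bdev NULL1 as namespaces, a TCP listener on 10.0.0.2:4420, and a 30-second spdk_nvme_perf run on the initiator side while namespace 1 is cycled and NULL1 is grown one step at a time (the null_size=1001, 1002, ... values in the trace). Reconstructed from the traced RPCs, the stress flow is roughly as follows (the while loop and the PERF_PID capture are assumptions about the script's control flow; only the individual commands appear in the log):

  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py bdev_malloc_create 32 512 -b Malloc0
  rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  rpc.py bdev_null_create NULL1 1000 512
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

  ./build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &        # 30 s of 512-byte random reads at QD 128
  PERF_PID=$!

  null_size=1000
  while kill -0 "$PERF_PID" 2>/dev/null; do            # keep going while perf is alive
      rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # hot-remove namespace 1 under load
      rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # hot-add it back
      null_size=$((null_size + 1))
      rpc.py bdev_null_resize NULL1 "$null_size"                      # grow NULL1: 1001, 1002, ...
  done

The "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" lines interleaved with this part of the log appear to be the perf client's reads failing while namespace 1 is detached, which is the condition the test is designed to stress.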
00:09:39.102 [2024-07-15 13:48:33.694561] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:39.102 EAL: No free 2048 kB hugepages reported on node 1 00:09:39.102 [2024-07-15 13:48:33.759323] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:39.102 [2024-07-15 13:48:33.871404] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:39.102 [2024-07-15 13:48:33.871468] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:39.102 [2024-07-15 13:48:33.871482] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:39.102 [2024-07-15 13:48:33.871493] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:39.102 [2024-07-15 13:48:33.871503] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:39.102 [2024-07-15 13:48:33.871593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:39.102 [2024-07-15 13:48:33.871654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:39.102 [2024-07-15 13:48:33.871658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:39.360 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:39.360 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:09:39.360 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:39.360 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:39.360 13:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:39.360 13:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:39.360 13:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:09:39.360 13:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:39.617 [2024-07-15 13:48:34.292893] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:39.617 13:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:39.874 13:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:40.131 [2024-07-15 13:48:34.799406] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:40.131 13:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:40.388 13:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:09:40.646 Malloc0 00:09:40.646 13:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:40.903 Delay0 00:09:40.903 13:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:41.160 13:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:09:41.417 NULL1 00:09:41.417 13:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:41.674 13:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3678379 00:09:41.674 13:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:09:41.674 13:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3678379 00:09:41.674 13:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:41.674 EAL: No free 2048 kB hugepages reported on node 1 00:09:42.605 Read completed with error (sct=0, sc=11) 00:09:42.861 13:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:42.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:42.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:42.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:42.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:42.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:42.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:43.118 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:43.118 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:43.118 13:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:09:43.118 13:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:09:43.375 true 00:09:43.375 13:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3678379 00:09:43.375 13:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:43.961 13:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:44.525 13:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:09:44.525 13:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:09:44.525 true 00:09:44.782 13:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3678379 00:09:44.782 13:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:45.039 13:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:45.295 13:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:09:45.295 13:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:09:45.295 true 00:09:45.552 13:48:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3678379 00:09:45.552 13:48:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:45.809 13:48:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:46.065 13:48:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:09:46.065 13:48:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:09:46.065 true 00:09:46.321 13:48:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3678379 00:09:46.321 13:48:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:47.251 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:47.251 13:48:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:47.508 13:48:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:09:47.508 13:48:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:09:47.765 true 00:09:47.765 13:48:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3678379 00:09:47.765 13:48:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:48.022 13:48:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:48.279 13:48:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:09:48.279 13:48:42 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:09:48.535 true 00:09:48.535 13:48:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3678379 00:09:48.535 13:48:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:49.464 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:49.464 13:48:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:49.464 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:49.464 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:49.464 13:48:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:09:49.464 13:48:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:09:49.720 true 00:09:49.720 13:48:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3678379 00:09:49.720 13:48:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:49.977 13:48:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:50.235 13:48:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:09:50.235 13:48:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:09:50.492 true 00:09:50.492 13:48:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3678379 00:09:50.492 13:48:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:51.422 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:51.422 13:48:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:51.679 13:48:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:09:51.679 13:48:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:09:51.936 true 00:09:51.936 13:48:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3678379 00:09:51.936 13:48:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:52.193 13:48:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:52.450 13:48:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:09:52.450 13:48:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:09:52.706 true 00:09:52.706 13:48:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3678379 00:09:52.706 13:48:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:53.636 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:53.636 13:48:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:53.636 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:53.893 13:48:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:09:53.893 13:48:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:09:54.151 true 00:09:54.151 13:48:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3678379 00:09:54.151 13:48:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:54.409 13:48:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:54.666 13:48:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:09:54.666 13:48:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:09:54.923 true 00:09:54.923 13:48:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3678379 00:09:54.923 13:48:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:55.855 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:55.855 13:48:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:56.112 13:48:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:09:56.112 13:48:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:09:56.112 true 00:09:56.368 13:48:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3678379 00:09:56.368 13:48:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:56.368 13:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:56.623 13:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:09:56.623 13:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:09:56.879 true 00:09:56.879 13:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3678379 00:09:56.879 13:48:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:57.807 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:57.807 13:48:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:58.082 13:48:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:09:58.082 13:48:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:09:58.346 true 00:09:58.346 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3678379 00:09:58.346 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:58.610 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:58.868 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:09:58.868 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:09:59.125 true 00:09:59.125 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3678379 00:09:59.125 13:48:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:00.052 13:48:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:00.052 13:48:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:10:00.052 13:48:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:10:00.308 true 00:10:00.308 13:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3678379 00:10:00.308 13:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:00.565 13:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:00.823 13:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:10:00.823 13:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:10:01.080 true 00:10:01.080 13:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3678379 00:10:01.080 13:48:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:02.010 13:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:02.010 13:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:10:02.010 13:48:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:10:02.267 true 00:10:02.267 13:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3678379 00:10:02.267 13:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:02.524 13:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:02.783 13:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:10:02.783 13:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:10:03.040 true 00:10:03.040 13:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3678379 00:10:03.040 13:48:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:03.977 13:48:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:04.235 13:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:10:04.235 13:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:10:04.493 true 00:10:04.493 13:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3678379 00:10:04.493 13:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:04.751 13:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:05.009 13:48:59 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:10:05.009 13:48:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:10:05.266 true 00:10:05.266 13:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3678379 00:10:05.266 13:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.523 13:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:05.780 13:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:10:05.780 13:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:10:06.037 true 00:10:06.037 13:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3678379 00:10:06.037 13:49:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:07.415 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:07.415 13:49:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:07.415 13:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:10:07.415 13:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:10:07.672 true 00:10:07.672 13:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3678379 00:10:07.672 13:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:07.930 13:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:08.187 13:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:10:08.187 13:49:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:10:08.444 true 00:10:08.444 13:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3678379 00:10:08.444 13:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.379 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:09.379 13:49:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:09.379 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:09.379 13:49:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:10:09.379 13:49:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:10:09.636 true 00:10:09.636 13:49:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3678379 00:10:09.636 13:49:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.894 13:49:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:10.151 13:49:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:10:10.151 13:49:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:10:10.409 true 00:10:10.409 13:49:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3678379 00:10:10.409 13:49:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:11.342 13:49:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:11.342 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:11.599 13:49:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:10:11.599 13:49:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:10:11.856 true 00:10:11.856 Initializing NVMe Controllers 00:10:11.856 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:11.856 Controller IO queue size 128, less than required. 00:10:11.856 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:11.856 Controller IO queue size 128, less than required. 00:10:11.856 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:11.856 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:11.856 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:10:11.856 Initialization complete. Launching workers. 
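The @44-@50 entries above are successive passes of the namespace hotplug/resize loop in ns_hotplug_stress.sh. Reconstructed from the trace alone (a sketch, not the verbatim script; $rpc_py and $perf_pid are stand-ins for the scripts/rpc.py path and the background workload PID 3678379 seen in the log), one pass looks roughly like this:

    while kill -0 "$perf_pid"; do                                        # @44: keep looping while the background I/O workload is alive
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # @45: hot-remove namespace 1
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # @46: hot-add it back, backed by the Delay0 bdev
        null_size=$((null_size + 1))                                     # @49: 1010, 1011, ... as in the trace
        $rpc_py bdev_null_resize NULL1 "$null_size"                      # @50: grow the underlying null bdev ("true" on success)
    done

The loop ends once kill -0 reports the workload PID gone (the "No such process" line a little further down), after which the script waits on that PID and removes both namespaces before starting the threaded add/remove phase.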
00:10:11.856 ======================================================== 00:10:11.857 Latency(us) 00:10:11.857 Device Information : IOPS MiB/s Average min max 00:10:11.857 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 763.31 0.37 87125.81 2380.37 1017641.97 00:10:11.857 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 11259.88 5.50 11367.49 2857.65 457884.76 00:10:11.857 ======================================================== 00:10:11.857 Total : 12023.19 5.87 16177.11 2380.37 1017641.97 00:10:11.857 00:10:11.857 13:49:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3678379 00:10:11.857 13:49:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:12.114 13:49:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:12.372 13:49:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:10:12.372 13:49:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:10:12.629 true 00:10:12.629 13:49:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3678379 00:10:12.629 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3678379) - No such process 00:10:12.629 13:49:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3678379 00:10:12.629 13:49:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:12.886 13:49:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:13.144 13:49:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:10:13.144 13:49:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:10:13.144 13:49:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:10:13.144 13:49:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:13.144 13:49:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:10:13.401 null0 00:10:13.401 13:49:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:13.401 13:49:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:13.401 13:49:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:10:13.658 null1 00:10:13.658 13:49:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:13.658 13:49:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:13.658 13:49:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:10:13.916 null2 00:10:13.916 13:49:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:13.916 13:49:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:13.916 13:49:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:10:14.173 null3 00:10:14.173 13:49:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:14.173 13:49:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:14.173 13:49:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:10:14.430 null4 00:10:14.430 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:14.430 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:14.430 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:10:14.688 null5 00:10:14.688 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:14.688 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:14.688 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:10:14.946 null6 00:10:14.946 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:14.946 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:14.946 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:10:15.205 null7 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
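The @58-@60 entries above set up the backing devices for the threaded phase: nthreads=8, an empty pids array, and eight null bdevs (null0 through null7), each created with a size of 100 MB and a 4096-byte block size. A sketch of that setup loop, reconstructed from the trace ($rpc_py again stands for the scripts/rpc.py invocation shown in the log):

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        # null0 .. null7: one 100 MB / 4096-byte-block null bdev per add/remove worker
        $rpc_py bdev_null_create "null$i" 100 4096
    done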
00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
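The @63, @14, @16 and @17 entries above trace the add_remove helper that each background worker runs: it binds one namespace ID to one null bdev (add_remove 1 null0, add_remove 2 null1, and so on) and hot-adds it in a ten-pass loop, with the matching @18 removals appearing further down. A rough reconstruction from the trace, not the verbatim script:

    add_remove() {
        local nsid=$1 bdev=$2                     # @14: e.g. nsid=1 bdev=null0
        for ((i = 0; i < 10; i++)); do            # @16: ten add/remove passes per worker
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # @17: hot-add the namespace
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # @18: hot-remove it again
        done
    }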
00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
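The @62-@64 entries above dispatch the eight workers into the background and collect their PIDs, and the @66 wait just below (PIDs 3682436 through 3682452) blocks until all of them have finished. The dispatch loop, sketched from the trace:

    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &   # @63: nsid 1..8 paired with null0..null7
        pids+=($!)                         # @64: remember each worker's PID
    done
    wait "${pids[@]}"                      # @66: block until every worker exits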
00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3682436 3682437 3682439 3682441 3682444 3682447 3682449 3682452 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.205 13:49:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:15.463 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:15.463 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:15.463 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:15.463 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:15.463 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:15.463 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:15.463 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:15.463 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:15.721 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:15.721 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.721 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:15.721 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:15.721 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:15.721 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.721 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.721 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:15.721 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:15.721 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:15.721 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.721 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:15.721 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:15.721 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:15.721 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.721 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.721 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:15.721 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:15.721 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:15.721 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.721 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:15.721 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:15.721 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:15.721 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:15.979 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:15.979 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:15.979 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:15.979 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:15.979 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:15.979 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:15.979 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:15.979 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:16.237 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.237 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.237 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:16.237 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.237 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.237 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:16.237 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.237 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.237 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.237 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:16.237 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.237 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:16.237 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.237 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.237 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.237 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.237 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:16.237 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:16.237 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.237 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:10:16.237 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:16.237 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.237 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.237 13:49:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:16.495 13:49:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:16.495 13:49:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.495 13:49:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:16.495 13:49:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:16.495 13:49:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:16.495 13:49:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:16.495 13:49:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:16.495 13:49:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:16.753 13:49:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.753 13:49:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.753 13:49:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:16.753 13:49:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.753 13:49:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.753 13:49:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:16.753 13:49:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.753 13:49:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.753 13:49:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.753 13:49:11 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.753 13:49:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:16.753 13:49:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:16.753 13:49:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.753 13:49:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.753 13:49:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:16.753 13:49:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.753 13:49:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.753 13:49:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.753 13:49:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.753 13:49:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:16.753 13:49:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:16.753 13:49:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:16.753 13:49:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:16.753 13:49:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:17.012 13:49:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:17.012 13:49:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:17.012 13:49:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:17.012 13:49:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:17.012 13:49:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:17.012 13:49:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:17.012 13:49:11 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:17.012 13:49:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:17.269 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.269 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.269 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:17.269 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.269 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.269 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:17.269 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.269 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.269 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:17.269 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.269 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.269 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:17.269 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.269 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.269 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:17.269 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.269 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.269 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:17.269 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.269 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.269 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:17.269 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.269 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:10:17.269 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:17.527 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:17.527 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:17.527 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:17.527 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:17.527 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:17.527 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:17.527 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:17.527 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:17.784 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.784 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.784 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:17.784 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.784 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.784 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:17.784 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.784 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.784 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:17.784 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.784 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.784 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:17.784 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.784 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.784 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:17.784 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.784 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.784 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:17.784 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.785 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.785 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:17.785 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:17.785 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:17.785 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:18.090 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:18.090 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:18.090 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:18.090 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:18.090 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:18.091 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:18.091 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.091 13:49:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:18.371 13:49:13 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.371 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.371 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:18.371 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.371 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.371 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:18.371 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.371 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.371 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:18.371 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.371 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.371 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:18.371 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.371 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.371 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:18.371 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.371 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.371 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:18.371 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.371 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.371 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:18.371 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.371 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.371 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:18.628 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:18.628 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:18.628 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:18.628 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.628 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:18.628 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:18.628 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:18.628 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:18.886 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.886 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.886 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:18.886 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.886 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.886 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:18.886 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.886 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.886 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:18.886 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.886 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.886 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:18.886 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.886 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.886 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 
)) 00:10:18.886 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.886 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:18.886 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:18.886 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.886 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.886 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:18.886 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:18.886 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:18.886 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:19.144 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:19.144 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:19.144 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:19.402 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:19.402 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:19.402 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:19.402 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.402 13:49:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:19.660 13:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.660 13:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.660 13:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:19.660 13:49:14 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.660 13:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.660 13:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:19.660 13:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.660 13:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.660 13:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:19.660 13:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.660 13:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.660 13:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:19.660 13:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.660 13:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.660 13:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:19.660 13:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.660 13:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.660 13:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:19.660 13:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.660 13:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.660 13:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:19.660 13:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:19.660 13:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:19.660 13:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:19.918 13:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:19.918 13:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:19.918 13:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:19.918 13:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:19.918 13:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.918 13:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:19.918 13:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:19.918 13:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:20.176 13:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.176 13:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.176 13:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:20.176 13:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.176 13:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.176 13:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:20.176 13:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.176 13:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.176 13:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:20.176 13:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.176 13:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.176 13:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:20.176 13:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.176 13:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.176 13:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:20.176 13:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.176 13:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.176 13:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:20.176 13:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.176 13:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.176 13:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:20.176 13:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.176 13:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.176 13:49:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:20.434 13:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:20.434 13:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:20.434 13:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:20.434 13:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.434 13:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:20.434 13:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:20.434 13:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:20.434 13:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:20.692 13:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.692 13:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.692 13:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.692 13:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.692 13:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.692 13:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.692 13:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.692 13:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.692 13:49:15 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.692 13:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.692 13:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.692 13:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.692 13:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.692 13:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.692 13:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:20.692 13:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:20.692 13:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:20.692 13:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:10:20.692 13:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:20.693 13:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:10:20.693 13:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:20.693 13:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:10:20.693 13:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:20.693 13:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:20.693 rmmod nvme_tcp 00:10:20.693 rmmod nvme_fabrics 00:10:20.693 rmmod nvme_keyring 00:10:20.693 13:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:20.693 13:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:10:20.693 13:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:10:20.693 13:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 3678067 ']' 00:10:20.693 13:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 3678067 00:10:20.693 13:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 3678067 ']' 00:10:20.693 13:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 3678067 00:10:20.693 13:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:10:20.693 13:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:20.693 13:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3678067 00:10:20.693 13:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:20.693 13:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:20.693 13:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3678067' 00:10:20.693 killing process with pid 3678067 00:10:20.693 13:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 3678067 00:10:20.693 13:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 3678067 00:10:20.952 13:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:20.952 13:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp 
== \t\c\p ]] 00:10:20.952 13:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:20.952 13:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:20.952 13:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:20.952 13:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:20.952 13:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:20.952 13:49:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:23.490 13:49:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:23.490 00:10:23.490 real 0m46.584s 00:10:23.490 user 3m31.346s 00:10:23.490 sys 0m16.775s 00:10:23.490 13:49:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:23.490 13:49:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:23.490 ************************************ 00:10:23.490 END TEST nvmf_ns_hotplug_stress 00:10:23.490 ************************************ 00:10:23.490 13:49:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:23.490 13:49:17 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:23.490 13:49:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:23.490 13:49:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:23.490 13:49:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:23.490 ************************************ 00:10:23.490 START TEST nvmf_connect_stress 00:10:23.490 ************************************ 00:10:23.490 13:49:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:23.490 * Looking for test storage... 
00:10:23.490 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:23.491 13:49:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:23.491 13:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:10:23.491 13:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:23.491 13:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:23.491 13:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:23.491 13:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:23.491 13:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:23.491 13:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:23.491 13:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:23.491 13:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:23.491 13:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:23.491 13:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:23.491 13:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:23.491 13:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:23.491 13:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:23.491 13:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:23.491 13:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:23.491 13:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:23.491 13:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:23.491 13:49:17 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:23.491 13:49:17 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:23.491 13:49:17 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:23.491 13:49:17 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.491 13:49:17 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.491 13:49:17 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.491 13:49:17 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:10:23.491 13:49:17 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.491 13:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:10:23.491 13:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:23.491 13:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:23.491 13:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:23.491 13:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:23.491 13:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:23.491 13:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:23.491 13:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:23.491 13:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:23.491 13:49:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:10:23.491 13:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:23.491 13:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:23.491 13:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:23.491 13:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:23.491 13:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:23.491 13:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:23.491 13:49:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:10:23.491 13:49:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:23.491 13:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:23.491 13:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:23.491 13:49:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:23.491 13:49:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:10:25.394 Found 0000:84:00.0 (0x8086 - 0x159b) 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:10:25.394 Found 0000:84:00.1 (0x8086 - 0x159b) 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:10:25.394 Found net devices under 0000:84:00.0: cvl_0_0 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:25.394 13:49:19 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:10:25.394 Found net devices under 0000:84:00.1: cvl_0_1 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:25.394 13:49:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:25.394 13:49:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:25.394 13:49:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:25.394 13:49:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:25.394 13:49:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:25.394 13:49:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:25.394 13:49:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:25.394 13:49:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:25.394 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:25.394 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:10:25.394 00:10:25.394 --- 10.0.0.2 ping statistics --- 00:10:25.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:25.394 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:10:25.394 13:49:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:25.394 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:25.394 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:10:25.394 00:10:25.394 --- 10.0.0.1 ping statistics --- 00:10:25.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:25.394 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:10:25.394 13:49:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:25.394 13:49:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:10:25.394 13:49:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:25.394 13:49:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:25.394 13:49:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:25.394 13:49:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:25.394 13:49:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:25.394 13:49:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:25.394 13:49:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:25.394 13:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:10:25.394 13:49:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:25.394 13:49:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:25.394 13:49:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:25.394 13:49:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=3685338 00:10:25.394 13:49:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:25.394 13:49:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 3685338 00:10:25.394 13:49:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 3685338 ']' 00:10:25.394 13:49:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.394 13:49:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:25.395 13:49:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:25.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:25.395 13:49:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:25.395 13:49:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:25.395 [2024-07-15 13:49:20.189319] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
00:10:25.395 [2024-07-15 13:49:20.189416] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:25.395 EAL: No free 2048 kB hugepages reported on node 1 00:10:25.653 [2024-07-15 13:49:20.255294] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:25.653 [2024-07-15 13:49:20.355144] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:25.653 [2024-07-15 13:49:20.355200] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:25.653 [2024-07-15 13:49:20.355224] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:25.653 [2024-07-15 13:49:20.355234] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:25.653 [2024-07-15 13:49:20.355250] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:25.653 [2024-07-15 13:49:20.355383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:25.653 [2024-07-15 13:49:20.355497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:25.653 [2024-07-15 13:49:20.355501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:25.653 13:49:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:25.653 13:49:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:10:25.653 13:49:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:25.653 13:49:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:25.653 13:49:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:25.911 13:49:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:25.911 13:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:25.911 13:49:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.911 13:49:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:25.911 [2024-07-15 13:49:20.503931] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:25.911 13:49:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.911 13:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:25.911 13:49:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.911 13:49:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:25.911 13:49:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.911 13:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:25.911 13:49:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.911 13:49:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:25.911 [2024-07-15 13:49:20.529897] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:25.911 13:49:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.911 13:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:25.911 13:49:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.911 13:49:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:25.911 NULL1 00:10:25.911 13:49:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.911 13:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3685359 00:10:25.911 13:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:25.911 13:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:25.911 13:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:10:25.911 13:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:10:25.911 13:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:25.911 13:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:25.911 13:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:25.911 13:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:25.911 13:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:25.911 13:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:25.911 13:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:25.911 13:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:25.911 13:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:25.911 13:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:25.911 13:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:25.911 13:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:25.911 13:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:25.911 13:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:25.911 13:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:25.911 13:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:25.911 13:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:25.911 13:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:25.911 13:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:25.911 13:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:25.911 13:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
00:10:25.911 13:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:25.911 13:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:25.912 13:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:25.912 EAL: No free 2048 kB hugepages reported on node 1 00:10:25.912 13:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:25.912 13:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:25.912 13:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:25.912 13:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:25.912 13:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:25.912 13:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:25.912 13:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:25.912 13:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:25.912 13:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:25.912 13:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:25.912 13:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:25.912 13:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:25.912 13:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:25.912 13:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:25.912 13:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:25.912 13:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:25.912 13:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3685359 00:10:25.912 13:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:25.912 13:49:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.912 13:49:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:26.169 13:49:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.169 13:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3685359 00:10:26.169 13:49:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:26.169 13:49:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.169 13:49:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:26.427 13:49:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.427 13:49:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3685359 00:10:26.427 13:49:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:26.427 13:49:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.427 13:49:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:26.991 13:49:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.991 13:49:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3685359 
00:10:26.991 13:49:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:26.991 13:49:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.991 13:49:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:27.248 13:49:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.248 13:49:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3685359 00:10:27.248 13:49:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:27.248 13:49:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.248 13:49:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:27.505 13:49:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.505 13:49:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3685359 00:10:27.505 13:49:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:27.505 13:49:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.505 13:49:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:27.762 13:49:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.762 13:49:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3685359 00:10:27.762 13:49:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:27.762 13:49:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.762 13:49:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:28.019 13:49:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.019 13:49:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3685359 00:10:28.019 13:49:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:28.019 13:49:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.019 13:49:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:28.584 13:49:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.584 13:49:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3685359 00:10:28.584 13:49:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:28.584 13:49:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.584 13:49:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:28.841 13:49:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.841 13:49:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3685359 00:10:28.841 13:49:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:28.841 13:49:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.841 13:49:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:29.097 13:49:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.097 13:49:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3685359 00:10:29.097 13:49:23 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:10:29.097 13:49:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.097 13:49:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:29.354 13:49:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.354 13:49:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3685359 00:10:29.354 13:49:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:29.354 13:49:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.354 13:49:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:29.610 13:49:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.611 13:49:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3685359 00:10:29.611 13:49:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:29.611 13:49:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.611 13:49:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:30.174 13:49:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.174 13:49:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3685359 00:10:30.174 13:49:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:30.174 13:49:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.174 13:49:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:30.431 13:49:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.431 13:49:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3685359 00:10:30.431 13:49:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:30.431 13:49:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.431 13:49:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:30.688 13:49:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.688 13:49:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3685359 00:10:30.688 13:49:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:30.688 13:49:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.688 13:49:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:30.944 13:49:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.944 13:49:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3685359 00:10:30.944 13:49:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:30.944 13:49:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.944 13:49:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:31.506 13:49:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.506 13:49:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3685359 00:10:31.506 13:49:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:31.506 
13:49:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.506 13:49:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:31.763 13:49:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.763 13:49:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3685359 00:10:31.763 13:49:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:31.763 13:49:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.763 13:49:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:32.020 13:49:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.020 13:49:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3685359 00:10:32.020 13:49:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:32.020 13:49:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.020 13:49:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:32.277 13:49:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.277 13:49:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3685359 00:10:32.277 13:49:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:32.277 13:49:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.277 13:49:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:32.534 13:49:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.534 13:49:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3685359 00:10:32.534 13:49:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:32.534 13:49:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.534 13:49:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:33.098 13:49:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.098 13:49:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3685359 00:10:33.098 13:49:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:33.098 13:49:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.098 13:49:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:33.355 13:49:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.355 13:49:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3685359 00:10:33.355 13:49:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:33.355 13:49:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.355 13:49:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:33.612 13:49:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.612 13:49:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3685359 00:10:33.612 13:49:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:33.612 13:49:28 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.612 13:49:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:33.869 13:49:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.869 13:49:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3685359 00:10:33.869 13:49:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:33.869 13:49:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.869 13:49:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:34.126 13:49:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:34.126 13:49:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3685359 00:10:34.126 13:49:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:34.126 13:49:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:34.126 13:49:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:34.692 13:49:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:34.692 13:49:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3685359 00:10:34.692 13:49:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:34.692 13:49:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:34.692 13:49:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:34.950 13:49:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:34.950 13:49:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3685359 00:10:34.950 13:49:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:34.950 13:49:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:34.950 13:49:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:35.208 13:49:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:35.208 13:49:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3685359 00:10:35.208 13:49:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:35.208 13:49:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:35.208 13:49:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:35.466 13:49:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:35.466 13:49:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3685359 00:10:35.466 13:49:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:35.466 13:49:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:35.466 13:49:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:35.724 13:49:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:35.724 13:49:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3685359 00:10:35.724 13:49:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:35.724 13:49:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 
00:10:35.724 13:49:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:35.982 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:36.240 13:49:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.240 13:49:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3685359 00:10:36.240 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3685359) - No such process 00:10:36.240 13:49:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3685359 00:10:36.240 13:49:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:36.240 13:49:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:36.240 13:49:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:10:36.240 13:49:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:36.240 13:49:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:10:36.240 13:49:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:36.240 13:49:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:10:36.240 13:49:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:36.240 13:49:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:36.240 rmmod nvme_tcp 00:10:36.240 rmmod nvme_fabrics 00:10:36.240 rmmod nvme_keyring 00:10:36.240 13:49:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:36.240 13:49:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:10:36.240 13:49:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:10:36.240 13:49:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 3685338 ']' 00:10:36.240 13:49:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 3685338 00:10:36.240 13:49:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 3685338 ']' 00:10:36.240 13:49:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 3685338 00:10:36.240 13:49:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:10:36.240 13:49:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:36.240 13:49:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3685338 00:10:36.240 13:49:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:36.240 13:49:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:36.240 13:49:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3685338' 00:10:36.240 killing process with pid 3685338 00:10:36.240 13:49:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 3685338 00:10:36.240 13:49:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 3685338 00:10:36.498 13:49:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:36.498 13:49:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:36.498 13:49:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:10:36.498 13:49:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:36.498 13:49:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:36.498 13:49:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:36.498 13:49:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:36.498 13:49:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:38.401 13:49:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:38.402 00:10:38.402 real 0m15.381s 00:10:38.402 user 0m37.778s 00:10:38.402 sys 0m6.431s 00:10:38.658 13:49:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:38.659 13:49:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:38.659 ************************************ 00:10:38.659 END TEST nvmf_connect_stress 00:10:38.659 ************************************ 00:10:38.659 13:49:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:38.659 13:49:33 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:38.659 13:49:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:38.659 13:49:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:38.659 13:49:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:38.659 ************************************ 00:10:38.659 START TEST nvmf_fused_ordering 00:10:38.659 ************************************ 00:10:38.659 13:49:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:38.659 * Looking for test storage... 
00:10:38.659 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:38.659 13:49:33 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:38.659 13:49:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:10:38.659 13:49:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:38.659 13:49:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:38.659 13:49:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:38.659 13:49:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:38.659 13:49:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:38.659 13:49:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:38.659 13:49:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:38.659 13:49:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:38.659 13:49:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:38.659 13:49:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:38.659 13:49:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:38.659 13:49:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:38.659 13:49:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:38.659 13:49:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:38.659 13:49:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:38.659 13:49:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:38.659 13:49:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:38.659 13:49:33 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:38.659 13:49:33 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:38.659 13:49:33 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:38.659 13:49:33 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.659 13:49:33 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.659 13:49:33 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.659 13:49:33 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:10:38.659 13:49:33 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.659 13:49:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:10:38.659 13:49:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:38.659 13:49:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:38.659 13:49:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:38.659 13:49:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:38.659 13:49:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:38.659 13:49:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:38.659 13:49:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:38.659 13:49:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:38.659 13:49:33 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:10:38.659 13:49:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:38.659 13:49:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:38.659 13:49:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:38.659 13:49:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:38.659 13:49:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:38.659 13:49:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:38.659 13:49:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:10:38.659 13:49:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:38.659 13:49:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:38.659 13:49:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:38.659 13:49:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:10:38.659 13:49:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:10:41.198 Found 0000:84:00.0 (0x8086 - 0x159b) 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:10:41.198 Found 0000:84:00.1 (0x8086 - 0x159b) 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:10:41.198 Found net devices under 0000:84:00.0: cvl_0_0 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:41.198 13:49:35 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:10:41.198 Found net devices under 0000:84:00.1: cvl_0_1 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:41.198 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:41.199 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:41.199 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:41.199 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:41.199 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:10:41.199 00:10:41.199 --- 10.0.0.2 ping statistics --- 00:10:41.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.199 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:10:41.199 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:41.199 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:41.199 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:10:41.199 00:10:41.199 --- 10.0.0.1 ping statistics --- 00:10:41.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.199 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:10:41.199 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:41.199 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:10:41.199 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:41.199 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:41.199 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:41.199 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:41.199 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:41.199 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:41.199 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:41.199 13:49:35 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:10:41.199 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:41.199 13:49:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:41.199 13:49:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:41.199 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=3688529 00:10:41.199 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:41.199 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 3688529 00:10:41.199 13:49:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 3688529 ']' 00:10:41.199 13:49:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:41.199 13:49:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:41.199 13:49:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:41.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:41.199 13:49:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:41.199 13:49:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:41.199 [2024-07-15 13:49:35.644874] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
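(For orientation: the target-side bring-up that the surrounding entries trace can be reproduced by hand roughly as below. This is a minimal sketch assembled only from the commands visible in this log, assuming root, the same ice/E810 interface names cvl_0_0 and cvl_0_1, and an SPDK checkout at $SPDK_DIR; scripts/rpc.py is used here in place of the harness's rpc_cmd wrapper, and the flags are copied verbatim from the trace rather than verified against other SPDK versions.)

  # flush the two test ports, move one into a private namespace, address both sides
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # start the SPDK NVMe-oF target inside the namespace on core 1 (-m 0x2)
  ip netns exec cvl_0_0_ns_spdk $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

  # configure it over /var/tmp/spdk.sock: TCP transport, one subsystem backed by a null bdev
  $SPDK_DIR/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  $SPDK_DIR/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $SPDK_DIR/scripts/rpc.py bdev_null_create NULL1 1000 512
  $SPDK_DIR/scripts/rpc.py bdev_wait_for_examine
  $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The fused_ordering tool invoked further down then attaches as an initiator with -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1', and the numbered fused_ordering(N) lines that dominate the remainder of this run appear to be its per-iteration progress output.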
00:10:41.199 [2024-07-15 13:49:35.644961] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:41.199 EAL: No free 2048 kB hugepages reported on node 1 00:10:41.199 [2024-07-15 13:49:35.710372] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.199 [2024-07-15 13:49:35.820393] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:41.199 [2024-07-15 13:49:35.820455] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:41.199 [2024-07-15 13:49:35.820476] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:41.199 [2024-07-15 13:49:35.820487] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:41.199 [2024-07-15 13:49:35.820497] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:41.199 [2024-07-15 13:49:35.820524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:41.199 13:49:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:41.199 13:49:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:10:41.199 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:41.199 13:49:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:41.199 13:49:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:41.199 13:49:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:41.199 13:49:35 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:41.199 13:49:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.199 13:49:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:41.199 [2024-07-15 13:49:35.968458] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:41.199 13:49:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.199 13:49:35 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:41.199 13:49:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.199 13:49:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:41.199 13:49:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.199 13:49:35 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:41.199 13:49:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.199 13:49:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:41.199 [2024-07-15 13:49:35.984623] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:41.199 13:49:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.199 13:49:35 
nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:41.199 13:49:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.199 13:49:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:41.199 NULL1 00:10:41.199 13:49:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.199 13:49:35 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:10:41.199 13:49:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.199 13:49:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:41.199 13:49:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.199 13:49:36 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:41.199 13:49:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.199 13:49:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:41.199 13:49:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.199 13:49:36 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:41.199 [2024-07-15 13:49:36.028226] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:10:41.199 [2024-07-15 13:49:36.028267] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3688666 ] 00:10:41.480 EAL: No free 2048 kB hugepages reported on node 1 00:10:41.748 Attached to nqn.2016-06.io.spdk:cnode1 00:10:41.748 Namespace ID: 1 size: 1GB 00:10:41.748 fused_ordering(0) 00:10:41.748 fused_ordering(1) 00:10:41.748 fused_ordering(2) 00:10:41.748 fused_ordering(3) 00:10:41.748 fused_ordering(4) 00:10:41.748 fused_ordering(5) 00:10:41.748 fused_ordering(6) 00:10:41.748 fused_ordering(7) 00:10:41.748 fused_ordering(8) 00:10:41.748 fused_ordering(9) 00:10:41.748 fused_ordering(10) 00:10:41.748 fused_ordering(11) 00:10:41.748 fused_ordering(12) 00:10:41.748 fused_ordering(13) 00:10:41.748 fused_ordering(14) 00:10:41.748 fused_ordering(15) 00:10:41.748 fused_ordering(16) 00:10:41.748 fused_ordering(17) 00:10:41.748 fused_ordering(18) 00:10:41.748 fused_ordering(19) 00:10:41.748 fused_ordering(20) 00:10:41.748 fused_ordering(21) 00:10:41.748 fused_ordering(22) 00:10:41.748 fused_ordering(23) 00:10:41.748 fused_ordering(24) 00:10:41.748 fused_ordering(25) 00:10:41.748 fused_ordering(26) 00:10:41.748 fused_ordering(27) 00:10:41.748 fused_ordering(28) 00:10:41.748 fused_ordering(29) 00:10:41.748 fused_ordering(30) 00:10:41.748 fused_ordering(31) 00:10:41.748 fused_ordering(32) 00:10:41.748 fused_ordering(33) 00:10:41.748 fused_ordering(34) 00:10:41.748 fused_ordering(35) 00:10:41.748 fused_ordering(36) 00:10:41.748 fused_ordering(37) 00:10:41.748 fused_ordering(38) 00:10:41.748 fused_ordering(39) 00:10:41.748 fused_ordering(40) 00:10:41.748 fused_ordering(41) 00:10:41.748 fused_ordering(42) 00:10:41.748 fused_ordering(43) 00:10:41.748 
fused_ordering(44) 00:10:41.748 fused_ordering(45) 00:10:41.748 fused_ordering(46) 00:10:41.748 fused_ordering(47) 00:10:41.748 fused_ordering(48) 00:10:41.748 fused_ordering(49) 00:10:41.748 fused_ordering(50) 00:10:41.748 fused_ordering(51) 00:10:41.748 fused_ordering(52) 00:10:41.748 fused_ordering(53) 00:10:41.748 fused_ordering(54) 00:10:41.748 fused_ordering(55) 00:10:41.748 fused_ordering(56) 00:10:41.748 fused_ordering(57) 00:10:41.748 fused_ordering(58) 00:10:41.748 fused_ordering(59) 00:10:41.748 fused_ordering(60) 00:10:41.748 fused_ordering(61) 00:10:41.748 fused_ordering(62) 00:10:41.748 fused_ordering(63) 00:10:41.748 fused_ordering(64) 00:10:41.748 fused_ordering(65) 00:10:41.748 fused_ordering(66) 00:10:41.748 fused_ordering(67) 00:10:41.748 fused_ordering(68) 00:10:41.748 fused_ordering(69) 00:10:41.748 fused_ordering(70) 00:10:41.748 fused_ordering(71) 00:10:41.748 fused_ordering(72) 00:10:41.748 fused_ordering(73) 00:10:41.748 fused_ordering(74) 00:10:41.748 fused_ordering(75) 00:10:41.748 fused_ordering(76) 00:10:41.748 fused_ordering(77) 00:10:41.748 fused_ordering(78) 00:10:41.748 fused_ordering(79) 00:10:41.748 fused_ordering(80) 00:10:41.748 fused_ordering(81) 00:10:41.748 fused_ordering(82) 00:10:41.748 fused_ordering(83) 00:10:41.748 fused_ordering(84) 00:10:41.748 fused_ordering(85) 00:10:41.748 fused_ordering(86) 00:10:41.748 fused_ordering(87) 00:10:41.748 fused_ordering(88) 00:10:41.748 fused_ordering(89) 00:10:41.748 fused_ordering(90) 00:10:41.748 fused_ordering(91) 00:10:41.748 fused_ordering(92) 00:10:41.748 fused_ordering(93) 00:10:41.748 fused_ordering(94) 00:10:41.748 fused_ordering(95) 00:10:41.748 fused_ordering(96) 00:10:41.748 fused_ordering(97) 00:10:41.748 fused_ordering(98) 00:10:41.748 fused_ordering(99) 00:10:41.748 fused_ordering(100) 00:10:41.748 fused_ordering(101) 00:10:41.748 fused_ordering(102) 00:10:41.748 fused_ordering(103) 00:10:41.748 fused_ordering(104) 00:10:41.748 fused_ordering(105) 00:10:41.748 fused_ordering(106) 00:10:41.748 fused_ordering(107) 00:10:41.748 fused_ordering(108) 00:10:41.748 fused_ordering(109) 00:10:41.748 fused_ordering(110) 00:10:41.748 fused_ordering(111) 00:10:41.748 fused_ordering(112) 00:10:41.748 fused_ordering(113) 00:10:41.748 fused_ordering(114) 00:10:41.748 fused_ordering(115) 00:10:41.748 fused_ordering(116) 00:10:41.748 fused_ordering(117) 00:10:41.748 fused_ordering(118) 00:10:41.748 fused_ordering(119) 00:10:41.749 fused_ordering(120) 00:10:41.749 fused_ordering(121) 00:10:41.749 fused_ordering(122) 00:10:41.749 fused_ordering(123) 00:10:41.749 fused_ordering(124) 00:10:41.749 fused_ordering(125) 00:10:41.749 fused_ordering(126) 00:10:41.749 fused_ordering(127) 00:10:41.749 fused_ordering(128) 00:10:41.749 fused_ordering(129) 00:10:41.749 fused_ordering(130) 00:10:41.749 fused_ordering(131) 00:10:41.749 fused_ordering(132) 00:10:41.749 fused_ordering(133) 00:10:41.749 fused_ordering(134) 00:10:41.749 fused_ordering(135) 00:10:41.749 fused_ordering(136) 00:10:41.749 fused_ordering(137) 00:10:41.749 fused_ordering(138) 00:10:41.749 fused_ordering(139) 00:10:41.749 fused_ordering(140) 00:10:41.749 fused_ordering(141) 00:10:41.749 fused_ordering(142) 00:10:41.749 fused_ordering(143) 00:10:41.749 fused_ordering(144) 00:10:41.749 fused_ordering(145) 00:10:41.749 fused_ordering(146) 00:10:41.749 fused_ordering(147) 00:10:41.749 fused_ordering(148) 00:10:41.749 fused_ordering(149) 00:10:41.749 fused_ordering(150) 00:10:41.749 fused_ordering(151) 00:10:41.749 fused_ordering(152) 00:10:41.749 
fused_ordering(153) 00:10:41.749 fused_ordering(154) 00:10:41.749 fused_ordering(155) 00:10:41.749 fused_ordering(156) 00:10:41.749 fused_ordering(157) 00:10:41.749 fused_ordering(158) 00:10:41.749 fused_ordering(159) 00:10:41.749 fused_ordering(160) 00:10:41.749 fused_ordering(161) 00:10:41.749 fused_ordering(162) 00:10:41.749 fused_ordering(163) 00:10:41.749 fused_ordering(164) 00:10:41.749 fused_ordering(165) 00:10:41.749 fused_ordering(166) 00:10:41.749 fused_ordering(167) 00:10:41.749 fused_ordering(168) 00:10:41.749 fused_ordering(169) 00:10:41.749 fused_ordering(170) 00:10:41.749 fused_ordering(171) 00:10:41.749 fused_ordering(172) 00:10:41.749 fused_ordering(173) 00:10:41.749 fused_ordering(174) 00:10:41.749 fused_ordering(175) 00:10:41.749 fused_ordering(176) 00:10:41.749 fused_ordering(177) 00:10:41.749 fused_ordering(178) 00:10:41.749 fused_ordering(179) 00:10:41.749 fused_ordering(180) 00:10:41.749 fused_ordering(181) 00:10:41.749 fused_ordering(182) 00:10:41.749 fused_ordering(183) 00:10:41.749 fused_ordering(184) 00:10:41.749 fused_ordering(185) 00:10:41.749 fused_ordering(186) 00:10:41.749 fused_ordering(187) 00:10:41.749 fused_ordering(188) 00:10:41.749 fused_ordering(189) 00:10:41.749 fused_ordering(190) 00:10:41.749 fused_ordering(191) 00:10:41.749 fused_ordering(192) 00:10:41.749 fused_ordering(193) 00:10:41.749 fused_ordering(194) 00:10:41.749 fused_ordering(195) 00:10:41.749 fused_ordering(196) 00:10:41.749 fused_ordering(197) 00:10:41.749 fused_ordering(198) 00:10:41.749 fused_ordering(199) 00:10:41.749 fused_ordering(200) 00:10:41.749 fused_ordering(201) 00:10:41.749 fused_ordering(202) 00:10:41.749 fused_ordering(203) 00:10:41.749 fused_ordering(204) 00:10:41.749 fused_ordering(205) 00:10:42.006 fused_ordering(206) 00:10:42.006 fused_ordering(207) 00:10:42.006 fused_ordering(208) 00:10:42.006 fused_ordering(209) 00:10:42.006 fused_ordering(210) 00:10:42.006 fused_ordering(211) 00:10:42.006 fused_ordering(212) 00:10:42.006 fused_ordering(213) 00:10:42.006 fused_ordering(214) 00:10:42.006 fused_ordering(215) 00:10:42.006 fused_ordering(216) 00:10:42.006 fused_ordering(217) 00:10:42.006 fused_ordering(218) 00:10:42.006 fused_ordering(219) 00:10:42.006 fused_ordering(220) 00:10:42.006 fused_ordering(221) 00:10:42.006 fused_ordering(222) 00:10:42.006 fused_ordering(223) 00:10:42.006 fused_ordering(224) 00:10:42.006 fused_ordering(225) 00:10:42.006 fused_ordering(226) 00:10:42.006 fused_ordering(227) 00:10:42.006 fused_ordering(228) 00:10:42.006 fused_ordering(229) 00:10:42.006 fused_ordering(230) 00:10:42.006 fused_ordering(231) 00:10:42.006 fused_ordering(232) 00:10:42.006 fused_ordering(233) 00:10:42.006 fused_ordering(234) 00:10:42.006 fused_ordering(235) 00:10:42.006 fused_ordering(236) 00:10:42.006 fused_ordering(237) 00:10:42.006 fused_ordering(238) 00:10:42.006 fused_ordering(239) 00:10:42.006 fused_ordering(240) 00:10:42.006 fused_ordering(241) 00:10:42.006 fused_ordering(242) 00:10:42.006 fused_ordering(243) 00:10:42.006 fused_ordering(244) 00:10:42.006 fused_ordering(245) 00:10:42.006 fused_ordering(246) 00:10:42.006 fused_ordering(247) 00:10:42.006 fused_ordering(248) 00:10:42.006 fused_ordering(249) 00:10:42.006 fused_ordering(250) 00:10:42.006 fused_ordering(251) 00:10:42.006 fused_ordering(252) 00:10:42.006 fused_ordering(253) 00:10:42.006 fused_ordering(254) 00:10:42.006 fused_ordering(255) 00:10:42.006 fused_ordering(256) 00:10:42.006 fused_ordering(257) 00:10:42.006 fused_ordering(258) 00:10:42.006 fused_ordering(259) 00:10:42.006 fused_ordering(260) 
00:10:42.006 fused_ordering(261) 00:10:42.006 fused_ordering(262) 00:10:42.006 fused_ordering(263) 00:10:42.006 fused_ordering(264) 00:10:42.006 fused_ordering(265) 00:10:42.006 fused_ordering(266) 00:10:42.006 fused_ordering(267) 00:10:42.006 fused_ordering(268) 00:10:42.006 fused_ordering(269) 00:10:42.006 fused_ordering(270) 00:10:42.006 fused_ordering(271) 00:10:42.006 fused_ordering(272) 00:10:42.006 fused_ordering(273) 00:10:42.006 fused_ordering(274) 00:10:42.006 fused_ordering(275) 00:10:42.006 fused_ordering(276) 00:10:42.006 fused_ordering(277) 00:10:42.006 fused_ordering(278) 00:10:42.006 fused_ordering(279) 00:10:42.006 fused_ordering(280) 00:10:42.006 fused_ordering(281) 00:10:42.006 fused_ordering(282) 00:10:42.006 fused_ordering(283) 00:10:42.006 fused_ordering(284) 00:10:42.006 fused_ordering(285) 00:10:42.006 fused_ordering(286) 00:10:42.006 fused_ordering(287) 00:10:42.006 fused_ordering(288) 00:10:42.006 fused_ordering(289) 00:10:42.006 fused_ordering(290) 00:10:42.006 fused_ordering(291) 00:10:42.006 fused_ordering(292) 00:10:42.006 fused_ordering(293) 00:10:42.006 fused_ordering(294) 00:10:42.006 fused_ordering(295) 00:10:42.006 fused_ordering(296) 00:10:42.006 fused_ordering(297) 00:10:42.006 fused_ordering(298) 00:10:42.006 fused_ordering(299) 00:10:42.006 fused_ordering(300) 00:10:42.006 fused_ordering(301) 00:10:42.006 fused_ordering(302) 00:10:42.006 fused_ordering(303) 00:10:42.006 fused_ordering(304) 00:10:42.006 fused_ordering(305) 00:10:42.006 fused_ordering(306) 00:10:42.006 fused_ordering(307) 00:10:42.006 fused_ordering(308) 00:10:42.006 fused_ordering(309) 00:10:42.006 fused_ordering(310) 00:10:42.006 fused_ordering(311) 00:10:42.006 fused_ordering(312) 00:10:42.006 fused_ordering(313) 00:10:42.006 fused_ordering(314) 00:10:42.006 fused_ordering(315) 00:10:42.006 fused_ordering(316) 00:10:42.006 fused_ordering(317) 00:10:42.006 fused_ordering(318) 00:10:42.006 fused_ordering(319) 00:10:42.006 fused_ordering(320) 00:10:42.006 fused_ordering(321) 00:10:42.006 fused_ordering(322) 00:10:42.006 fused_ordering(323) 00:10:42.006 fused_ordering(324) 00:10:42.006 fused_ordering(325) 00:10:42.006 fused_ordering(326) 00:10:42.006 fused_ordering(327) 00:10:42.006 fused_ordering(328) 00:10:42.006 fused_ordering(329) 00:10:42.006 fused_ordering(330) 00:10:42.006 fused_ordering(331) 00:10:42.006 fused_ordering(332) 00:10:42.006 fused_ordering(333) 00:10:42.006 fused_ordering(334) 00:10:42.006 fused_ordering(335) 00:10:42.006 fused_ordering(336) 00:10:42.006 fused_ordering(337) 00:10:42.006 fused_ordering(338) 00:10:42.006 fused_ordering(339) 00:10:42.006 fused_ordering(340) 00:10:42.006 fused_ordering(341) 00:10:42.006 fused_ordering(342) 00:10:42.006 fused_ordering(343) 00:10:42.006 fused_ordering(344) 00:10:42.006 fused_ordering(345) 00:10:42.006 fused_ordering(346) 00:10:42.006 fused_ordering(347) 00:10:42.006 fused_ordering(348) 00:10:42.006 fused_ordering(349) 00:10:42.006 fused_ordering(350) 00:10:42.006 fused_ordering(351) 00:10:42.006 fused_ordering(352) 00:10:42.006 fused_ordering(353) 00:10:42.006 fused_ordering(354) 00:10:42.006 fused_ordering(355) 00:10:42.006 fused_ordering(356) 00:10:42.006 fused_ordering(357) 00:10:42.006 fused_ordering(358) 00:10:42.006 fused_ordering(359) 00:10:42.006 fused_ordering(360) 00:10:42.006 fused_ordering(361) 00:10:42.006 fused_ordering(362) 00:10:42.006 fused_ordering(363) 00:10:42.006 fused_ordering(364) 00:10:42.006 fused_ordering(365) 00:10:42.006 fused_ordering(366) 00:10:42.006 fused_ordering(367) 00:10:42.006 
fused_ordering(368) 00:10:42.006 ... fused_ordering(409) 00:10:42.006 fused_ordering(410) 00:10:42.569 ... fused_ordering(614) 00:10:42.569 fused_ordering(615) 00:10:43.135 ... fused_ordering(819) 00:10:43.135 fused_ordering(820) 00:10:43.699 ... fused_ordering(861) 00:10:43.699 fused_ordering(862) 00:10:43.700 ... fused_ordering(1012)
00:10:43.700 fused_ordering(1013) 00:10:43.700 fused_ordering(1014) 00:10:43.700 fused_ordering(1015) 00:10:43.700 fused_ordering(1016) 00:10:43.700 fused_ordering(1017) 00:10:43.700 fused_ordering(1018) 00:10:43.700 fused_ordering(1019) 00:10:43.700 fused_ordering(1020) 00:10:43.700 fused_ordering(1021) 00:10:43.700 fused_ordering(1022) 00:10:43.700 fused_ordering(1023) 00:10:43.700 13:49:38 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:10:43.700 13:49:38 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:10:43.700 13:49:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:43.700 13:49:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:10:43.700 13:49:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:43.700 13:49:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:10:43.700 13:49:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:43.700 13:49:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:43.700 rmmod nvme_tcp 00:10:43.700 rmmod nvme_fabrics 00:10:43.700 rmmod nvme_keyring 00:10:43.700 13:49:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:43.700 13:49:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:10:43.700 13:49:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:10:43.701 13:49:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 3688529 ']' 00:10:43.701 13:49:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 3688529 00:10:43.701 13:49:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 3688529 ']' 00:10:43.701 13:49:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 3688529 00:10:43.701 13:49:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:10:43.701 13:49:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:43.701 13:49:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3688529 00:10:43.959 13:49:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:43.960 13:49:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:43.960 13:49:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3688529' 00:10:43.960 killing process with pid 3688529 00:10:43.960 13:49:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 3688529 00:10:43.960 13:49:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 3688529 00:10:43.960 13:49:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:43.960 13:49:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:43.960 13:49:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:43.960 13:49:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:43.960 13:49:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:43.960 13:49:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.960 13:49:38 nvmf_tcp.nvmf_fused_ordering -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:43.960 13:49:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.502 13:49:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:46.502 00:10:46.502 real 0m7.557s 00:10:46.502 user 0m4.686s 00:10:46.502 sys 0m3.619s 00:10:46.502 13:49:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:46.502 13:49:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:46.502 ************************************ 00:10:46.502 END TEST nvmf_fused_ordering 00:10:46.502 ************************************ 00:10:46.502 13:49:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:46.502 13:49:40 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:46.502 13:49:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:46.502 13:49:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:46.502 13:49:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:46.502 ************************************ 00:10:46.502 START TEST nvmf_delete_subsystem 00:10:46.502 ************************************ 00:10:46.502 13:49:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:46.502 * Looking for test storage... 00:10:46.502 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:46.502 13:49:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:46.502 13:49:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:10:46.502 13:49:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:46.502 13:49:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:46.502 13:49:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:46.502 13:49:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:46.502 13:49:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:46.502 13:49:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:46.502 13:49:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:46.502 13:49:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:46.502 13:49:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:46.502 13:49:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:46.502 13:49:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:46.502 13:49:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:46.502 13:49:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:46.502 13:49:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:46.502 13:49:40 
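The teardown traced just above boils down to a short shell sequence: retry-unloading the initiator kernel modules, killing the nvmf_tgt pid, and flushing the test interface. A minimal sketch of those steps, based on the commands nvmf/common.sh logs here; the pid (3688529) and the cvl_* names are specific to this run, and the retry cadence plus the explicit ip netns delete are assumptions about what the helpers do internally:

# unload the initiator-side modules; they can be briefly busy, hence the retry loop
set +e
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break
    sleep 1
done
modprobe -v -r nvme-fabrics
set -e

# stop the nvmf_tgt reactor started for the test and wait for it to exit
kill 3688529
wait 3688529 || true

# tear down the test networking (namespace removal assumed from _remove_spdk_ns)
ip netns delete cvl_0_0_ns_spdk || true
ip -4 addr flush cvl_0_1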
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:46.502 13:49:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:46.502 13:49:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:46.502 13:49:40 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:46.502 13:49:40 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:46.502 13:49:40 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:46.502 13:49:40 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.502 13:49:40 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.502 13:49:40 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.502 13:49:40 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:10:46.502 13:49:40 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.502 13:49:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:10:46.502 13:49:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:46.502 13:49:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:46.502 13:49:40 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:46.502 13:49:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:46.502 13:49:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:46.502 13:49:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:46.502 13:49:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:46.502 13:49:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:46.502 13:49:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:10:46.502 13:49:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:46.502 13:49:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:46.502 13:49:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:46.502 13:49:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:46.502 13:49:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:46.502 13:49:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.502 13:49:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:46.502 13:49:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.502 13:49:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:46.502 13:49:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:46.502 13:49:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:10:46.502 13:49:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:48.403 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:48.403 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:10:48.403 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:48.403 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:48.403 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:48.403 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:48.403 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:48.403 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:10:48.403 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:48.403 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:10:48.403 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:10:48.403 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:10:48.403 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:10:48.403 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:10:48.403 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:10:48.403 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:48.403 13:49:43 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:48.403 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:48.403 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:48.403 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:48.403 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:48.403 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:48.403 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:48.403 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:48.403 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:48.403 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:48.403 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:48.403 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:48.403 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:48.403 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:48.403 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:48.403 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:10:48.404 Found 0000:84:00.0 (0x8086 - 0x159b) 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:10:48.404 Found 0000:84:00.1 (0x8086 - 0x159b) 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:48.404 13:49:43 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:10:48.404 Found net devices under 0000:84:00.0: cvl_0_0 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:10:48.404 Found net devices under 0000:84:00.1: cvl_0_1 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:48.404 13:49:43 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:48.404 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:48.404 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:10:48.404 00:10:48.404 --- 10.0.0.2 ping statistics --- 00:10:48.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:48.404 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:48.404 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:48.404 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:10:48.404 00:10:48.404 --- 10.0.0.1 ping statistics --- 00:10:48.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:48.404 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=3690883 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 3690883 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 3690883 ']' 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:48.404 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:48.663 [2024-07-15 13:49:43.260269] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:10:48.663 [2024-07-15 13:49:43.260368] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:48.663 EAL: No free 2048 kB hugepages reported on node 1 00:10:48.663 [2024-07-15 13:49:43.323633] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:48.663 [2024-07-15 13:49:43.425900] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:48.663 [2024-07-15 13:49:43.425953] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:48.663 [2024-07-15 13:49:43.425973] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:48.663 [2024-07-15 13:49:43.425983] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:48.663 [2024-07-15 13:49:43.425992] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:48.663 [2024-07-15 13:49:43.426078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:48.663 [2024-07-15 13:49:43.426083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.921 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:48.921 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:10:48.921 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:48.921 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:48.921 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:48.921 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:48.921 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:48.921 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.921 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:48.921 [2024-07-15 13:49:43.574802] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:48.921 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.921 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:48.921 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.921 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:48.921 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.921 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:48.921 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.921 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:48.921 [2024-07-15 13:49:43.590997] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:48.921 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.921 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:48.921 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.921 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:48.921 NULL1 00:10:48.921 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:10:48.921 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:48.921 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.921 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:48.921 Delay0 00:10:48.921 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.921 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:48.921 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.921 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:48.921 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.921 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3690911 00:10:48.921 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:48.921 13:49:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:10:48.921 EAL: No free 2048 kB hugepages reported on node 1 00:10:48.921 [2024-07-15 13:49:43.665677] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
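With the target up, the rest of the setup is pure RPC plus one perf run, exactly as the xtrace shows: a TCP transport, subsystem cnode1 with a listener on 10.0.0.2:4420, a null bdev wrapped in a delay bdev so I/O stays queued, and spdk_nvme_perf driving it for 5 seconds. A sketch of the same calls, assuming rpc_cmd forwards to the stock scripts/rpc.py against /var/tmp/spdk.sock:

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# 1000 MB null bdev, 512-byte blocks, wrapped in ~1 s of artificial latency
rpc.py bdev_null_create NULL1 1000 512
rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# queue depth 128, 70/30 randrw, 512-byte I/O for 5 s, then let the delete race it
./build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!
sleep 2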
00:10:50.833 13:49:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:50.834 13:49:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.091 13:49:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
00:10:51.091 Read completed with error (sct=0, sc=8) ... Write completed with error (sct=0, sc=8) ... starting I/O failed: -6 (this pattern repeats for every request still queued on both qpairs) 
00:10:51.091 [2024-07-15 13:49:45.755224] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225e3e0 is same with the state(5) to be set 
00:10:51.092 [2024-07-15 13:49:45.755919] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f815c00d2f0 is same with the state(5) to be set 
00:10:52.022 [2024-07-15 13:49:46.721875] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fac0 is same with the state(5) to be set 
00:10:52.022 [2024-07-15 13:49:46.754701] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f815c00cfe0 is same with the state(5) to be set 
00:10:52.022 [2024-07-15 13:49:46.754900] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f815c00d600 is same with the state(5) to be set 
00:10:52.022 [2024-07-15 13:49:46.757545] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225e5c0 is same with the state(5) to be set 
00:10:52.023 [2024-07-15 13:49:46.757703] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225e980 is same with the state(5) to be set 
00:10:52.023 Initializing NVMe Controllers 00:10:52.023 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:52.023 Controller IO queue size 128, less than required. 00:10:52.023 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:52.023 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:10:52.023 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:10:52.023 Initialization complete. Launching workers.
00:10:52.023 ======================================================== 00:10:52.023 Latency(us) 00:10:52.023 Device Information : IOPS MiB/s Average min max 00:10:52.023 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 154.43 0.08 957488.04 594.92 2002177.88 00:10:52.023 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 160.39 0.08 949463.41 352.84 1998196.22 00:10:52.023 ======================================================== 00:10:52.023 Total : 314.82 0.15 953399.79 352.84 2002177.88 00:10:52.023 00:10:52.023 [2024-07-15 13:49:46.758670] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x225fac0 (9): Bad file descriptor 00:10:52.023 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:10:52.023 13:49:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.023 13:49:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:10:52.023 13:49:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3690911 00:10:52.023 13:49:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:10:52.585 13:49:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:10:52.585 13:49:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3690911 00:10:52.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3690911) - No such process 00:10:52.585 13:49:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3690911 00:10:52.585 13:49:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:10:52.585 13:49:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 3690911 00:10:52.585 13:49:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:10:52.585 13:49:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:52.585 13:49:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:10:52.585 13:49:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:52.585 13:49:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 3690911 00:10:52.585 13:49:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:10:52.585 13:49:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:52.585 13:49:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:52.585 13:49:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:52.585 13:49:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:52.585 13:49:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.585 13:49:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:52.585 13:49:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.585 13:49:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:10:52.585 13:49:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.585 13:49:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:52.585 [2024-07-15 13:49:47.280168] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:52.585 13:49:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.585 13:49:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:52.585 13:49:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.585 13:49:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:52.585 13:49:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.585 13:49:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3691336 00:10:52.585 13:49:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:10:52.585 13:49:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:52.585 13:49:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3691336 00:10:52.585 13:49:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:52.585 EAL: No free 2048 kB hugepages reported on node 1 00:10:52.585 [2024-07-15 13:49:47.347624] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
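For orientation: the delete_subsystem trace above amounts to starting spdk_nvme_perf in the background against the re-created subsystem and then polling it with kill -0 until it exits. A minimal sketch of that pattern, paraphrased from the xtrace rather than copied from delete_subsystem.sh (the binary path is shortened and the variable names are illustrative):

  # background I/O generator against the target, using the flags shown in the trace
  spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
                 -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!

  # poll the worker; stop once ~10 s (20 x 0.5 s) have elapsed
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do
      (( delay++ > 20 )) && break
      sleep 0.5
  done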
00:10:53.148 13:49:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:53.148 13:49:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3691336 00:10:53.148 13:49:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:53.711 13:49:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:53.711 13:49:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3691336 00:10:53.711 13:49:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:53.968 13:49:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:53.968 13:49:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3691336 00:10:53.968 13:49:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:54.531 13:49:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:54.531 13:49:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3691336 00:10:54.531 13:49:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:55.092 13:49:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:55.092 13:49:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3691336 00:10:55.092 13:49:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:55.657 13:49:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:55.657 13:49:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3691336 00:10:55.657 13:49:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:55.657 Initializing NVMe Controllers 00:10:55.657 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:55.657 Controller IO queue size 128, less than required. 00:10:55.657 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:55.657 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:10:55.657 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:10:55.657 Initialization complete. Launching workers. 
00:10:55.657 ======================================================== 00:10:55.657 Latency(us) 00:10:55.657 Device Information : IOPS MiB/s Average min max 00:10:55.657 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004155.13 1000212.80 1012240.59 00:10:55.657 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003289.06 1000184.84 1011441.34 00:10:55.657 ======================================================== 00:10:55.657 Total : 256.00 0.12 1003722.09 1000184.84 1012240.59 00:10:55.657 00:10:56.221 13:49:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:56.221 13:49:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3691336 00:10:56.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3691336) - No such process 00:10:56.221 13:49:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3691336 00:10:56.221 13:49:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:10:56.221 13:49:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:10:56.221 13:49:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:56.221 13:49:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:10:56.221 13:49:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:56.221 13:49:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:10:56.221 13:49:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:56.221 13:49:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:56.221 rmmod nvme_tcp 00:10:56.221 rmmod nvme_fabrics 00:10:56.221 rmmod nvme_keyring 00:10:56.221 13:49:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:56.221 13:49:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:10:56.221 13:49:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:10:56.221 13:49:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 3690883 ']' 00:10:56.221 13:49:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 3690883 00:10:56.221 13:49:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 3690883 ']' 00:10:56.221 13:49:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 3690883 00:10:56.221 13:49:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:10:56.221 13:49:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:56.221 13:49:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3690883 00:10:56.221 13:49:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:56.221 13:49:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:56.221 13:49:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3690883' 00:10:56.221 killing process with pid 3690883 00:10:56.221 13:49:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 3690883 00:10:56.221 13:49:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 
3690883 00:10:56.481 13:49:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:56.481 13:49:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:56.481 13:49:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:56.481 13:49:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:56.481 13:49:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:56.481 13:49:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:56.481 13:49:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:56.481 13:49:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:58.386 13:49:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:58.386 00:10:58.386 real 0m12.302s 00:10:58.386 user 0m27.566s 00:10:58.386 sys 0m3.003s 00:10:58.386 13:49:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:58.386 13:49:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:58.386 ************************************ 00:10:58.386 END TEST nvmf_delete_subsystem 00:10:58.386 ************************************ 00:10:58.386 13:49:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:58.387 13:49:53 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:10:58.387 13:49:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:58.387 13:49:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:58.387 13:49:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:58.645 ************************************ 00:10:58.645 START TEST nvmf_ns_masking 00:10:58.645 ************************************ 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:10:58.645 * Looking for test storage... 
00:10:58.645 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=0eb9ef48-3c92-486c-8210-0235595f391a 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=67d0d2d9-fc09-403e-8851-c67ea7da871a 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=fa24fb25-912d-40eb-bee2-1dc8616cd59c 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:10:58.645 13:49:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:01.173 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:01.173 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:11:01.173 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:01.173 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:01.173 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:01.173 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:01.173 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:01.173 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:11:01.173 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:01.173 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:11:01.173 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:11:01.173 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:11:01.173 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:11:01.173 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:11:01.173 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:11:01.173 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:01.173 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:01.173 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:01.173 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:01.173 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:11:01.174 Found 0000:84:00.0 (0x8086 - 0x159b) 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:11:01.174 Found 0000:84:00.1 (0x8086 - 0x159b) 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:01.174 
13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:11:01.174 Found net devices under 0000:84:00.0: cvl_0_0 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:11:01.174 Found net devices under 0000:84:00.1: cvl_0_1 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:01.174 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:01.174 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.114 ms 00:11:01.174 00:11:01.174 --- 10.0.0.2 ping statistics --- 00:11:01.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.174 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:01.174 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:01.174 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:11:01.174 00:11:01.174 --- 10.0.0.1 ping statistics --- 00:11:01.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.174 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=3693797 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 3693797 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 3693797 ']' 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:01.174 13:49:55 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:01.174 [2024-07-15 13:49:55.638431] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:11:01.174 [2024-07-15 13:49:55.638521] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:01.174 EAL: No free 2048 kB hugepages reported on node 1 00:11:01.174 [2024-07-15 13:49:55.703086] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.174 [2024-07-15 13:49:55.813132] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:01.174 [2024-07-15 13:49:55.813197] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:01.174 [2024-07-15 13:49:55.813210] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:01.174 [2024-07-15 13:49:55.813221] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:01.174 [2024-07-15 13:49:55.813231] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:01.174 [2024-07-15 13:49:55.813259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:01.174 13:49:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:01.175 13:49:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:01.175 13:49:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:01.430 [2024-07-15 13:49:56.231870] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:01.430 13:49:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:11:01.430 13:49:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:11:01.430 13:49:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:01.687 Malloc1 00:11:01.944 13:49:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:01.944 Malloc2 00:11:02.201 13:49:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
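To make the target-side setup easier to follow, here is the RPC sequence the ns_masking trace has issued so far, collected in one place; the last two calls appear in the next lines of the trace. rpc.py is shown without its full /var/jenkins/... path for brevity:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc1
  rpc.py bdev_malloc_create 64 512 -b Malloc2
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420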
00:11:02.201 13:49:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:02.766 13:49:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:02.766 [2024-07-15 13:49:57.539798] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:02.766 13:49:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:11:02.766 13:49:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I fa24fb25-912d-40eb-bee2-1dc8616cd59c -a 10.0.0.2 -s 4420 -i 4 00:11:03.022 13:49:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:11:03.022 13:49:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:03.022 13:49:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:03.022 13:49:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:03.022 13:49:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:04.916 13:49:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:04.916 13:49:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:04.916 13:49:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:04.916 13:49:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:04.916 13:49:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:04.916 13:49:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:11:04.916 13:49:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:04.916 13:49:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:04.916 13:49:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:04.916 13:49:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:04.916 13:49:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:11:04.916 13:49:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:04.916 13:49:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:05.173 [ 0]:0x1 00:11:05.173 13:49:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:05.173 13:49:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:05.173 13:49:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=83277e6b2ccd43a0a4be65489be8fd42 00:11:05.173 13:49:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 83277e6b2ccd43a0a4be65489be8fd42 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:05.173 13:49:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
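The visibility probe the trace repeats from here on is ns_masking.sh's ns_is_visible helper. A sketch of its logic, reconstructed from the xtrace lines above rather than copied from the script (the /dev/nvme0 node stands in for the ctrl_id detected after nvme connect):

  ns_is_visible() {
      local nsid=$1
      # the namespace must appear in the controller's active namespace list ...
      nvme list-ns /dev/nvme0 | grep "$nsid"
      # ... and must report a non-zero NGUID when identified
      local nguid
      nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
      [[ $nguid != "00000000000000000000000000000000" ]]
  }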
00:11:05.430 13:50:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:11:05.430 13:50:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:05.430 13:50:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:05.430 [ 0]:0x1 00:11:05.430 13:50:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:05.430 13:50:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:05.430 13:50:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=83277e6b2ccd43a0a4be65489be8fd42 00:11:05.430 13:50:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 83277e6b2ccd43a0a4be65489be8fd42 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:05.430 13:50:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:11:05.430 13:50:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:05.430 13:50:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:05.430 [ 1]:0x2 00:11:05.430 13:50:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:05.430 13:50:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:05.430 13:50:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0f9f10d6fa6f462599fd9912efee6af5 00:11:05.430 13:50:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0f9f10d6fa6f462599fd9912efee6af5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:05.430 13:50:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:11:05.430 13:50:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:05.687 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.687 13:50:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:05.976 13:50:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:06.268 13:50:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:11:06.268 13:50:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I fa24fb25-912d-40eb-bee2-1dc8616cd59c -a 10.0.0.2 -s 4420 -i 4 00:11:06.525 13:50:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:06.525 13:50:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:06.525 13:50:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:06.525 13:50:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:11:06.525 13:50:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:11:06.525 13:50:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:08.416 13:50:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:08.416 13:50:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:08.416 13:50:03 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:08.416 13:50:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:08.416 13:50:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:08.416 13:50:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:11:08.416 13:50:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:08.416 13:50:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:08.673 13:50:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:08.673 13:50:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:08.673 13:50:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:11:08.673 13:50:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:08.673 13:50:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:08.673 13:50:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:08.673 13:50:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:08.673 13:50:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:08.673 13:50:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:08.673 13:50:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:08.673 13:50:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:08.673 13:50:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:08.673 13:50:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:08.673 13:50:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:08.673 13:50:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:08.673 13:50:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:08.673 13:50:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:08.673 13:50:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:08.673 13:50:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:08.673 13:50:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:08.673 13:50:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:11:08.673 13:50:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:08.673 13:50:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:08.673 [ 0]:0x2 00:11:08.673 13:50:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:08.673 13:50:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:08.673 13:50:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0f9f10d6fa6f462599fd9912efee6af5 00:11:08.673 13:50:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
0f9f10d6fa6f462599fd9912efee6af5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:08.673 13:50:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:08.930 13:50:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:11:08.930 13:50:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:08.930 13:50:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:08.930 [ 0]:0x1 00:11:08.930 13:50:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:08.930 13:50:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:08.930 13:50:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=83277e6b2ccd43a0a4be65489be8fd42 00:11:08.930 13:50:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 83277e6b2ccd43a0a4be65489be8fd42 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:08.930 13:50:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:11:08.930 13:50:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:08.930 13:50:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:08.930 [ 1]:0x2 00:11:08.930 13:50:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:08.930 13:50:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:08.930 13:50:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0f9f10d6fa6f462599fd9912efee6af5 00:11:08.930 13:50:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0f9f10d6fa6f462599fd9912efee6af5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:08.930 13:50:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:09.186 13:50:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:11:09.186 13:50:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:09.186 13:50:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:09.186 13:50:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:09.186 13:50:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:09.186 13:50:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:09.187 13:50:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:09.187 13:50:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:09.187 13:50:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:09.187 13:50:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:09.187 13:50:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:09.187 13:50:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:09.443 13:50:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:11:09.443 13:50:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:09.443 13:50:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:09.443 13:50:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:09.443 13:50:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:09.443 13:50:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:09.443 13:50:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:11:09.443 13:50:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:09.443 13:50:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:09.443 [ 0]:0x2 00:11:09.443 13:50:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:09.443 13:50:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:09.443 13:50:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0f9f10d6fa6f462599fd9912efee6af5 00:11:09.443 13:50:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0f9f10d6fa6f462599fd9912efee6af5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:09.443 13:50:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:11:09.443 13:50:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:09.443 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.443 13:50:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:09.700 13:50:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:11:09.700 13:50:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I fa24fb25-912d-40eb-bee2-1dc8616cd59c -a 10.0.0.2 -s 4420 -i 4 00:11:09.700 13:50:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:09.700 13:50:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:09.700 13:50:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:09.700 13:50:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:11:09.700 13:50:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:11:09.700 13:50:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:12.224 13:50:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:12.224 13:50:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:12.224 13:50:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:12.224 13:50:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:11:12.224 13:50:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:12.224 13:50:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
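[Editor's note] For reference, the ns_is_visible helper that ns_masking.sh keeps invoking above (script lines 43-45 in this trace) reduces to roughly the following shell sketch. The body is reconstructed from the traced commands, not copied from the script source, and /dev/nvme0 is simply the controller name assigned in this particular run:

    ns_is_visible() {
        local nsid=$1
        # The namespace must appear in the controller's namespace list...
        nvme list-ns /dev/nvme0 | grep "$nsid"
        # ...and report a non-zero NGUID; an all-zero NGUID means the namespace
        # is hidden from this host.
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }

The NOT wrapper seen around some calls simply asserts that this check fails, i.e. that the namespace is masked from the connected host.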
00:11:12.224 13:50:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:12.224 13:50:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:12.224 13:50:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:12.224 13:50:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:12.224 13:50:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:11:12.224 13:50:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:12.224 13:50:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:12.224 [ 0]:0x1 00:11:12.224 13:50:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:12.224 13:50:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:12.224 13:50:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=83277e6b2ccd43a0a4be65489be8fd42 00:11:12.224 13:50:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 83277e6b2ccd43a0a4be65489be8fd42 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:12.224 13:50:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:11:12.224 13:50:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:12.224 13:50:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:12.224 [ 1]:0x2 00:11:12.224 13:50:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:12.224 13:50:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:12.224 13:50:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0f9f10d6fa6f462599fd9912efee6af5 00:11:12.224 13:50:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0f9f10d6fa6f462599fd9912efee6af5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:12.224 13:50:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:12.481 13:50:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:11:12.481 13:50:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:12.481 13:50:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:12.481 13:50:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:12.481 13:50:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:12.481 13:50:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:12.481 13:50:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:12.481 13:50:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:12.481 13:50:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:12.481 13:50:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:12.481 13:50:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:12.481 13:50:07 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:11:12.481 13:50:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:12.481 13:50:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:12.481 13:50:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:12.481 13:50:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:12.481 13:50:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:12.481 13:50:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:12.481 13:50:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:11:12.481 13:50:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:12.481 13:50:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:12.481 [ 0]:0x2 00:11:12.481 13:50:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:12.481 13:50:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:12.481 13:50:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0f9f10d6fa6f462599fd9912efee6af5 00:11:12.481 13:50:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0f9f10d6fa6f462599fd9912efee6af5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:12.481 13:50:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:12.481 13:50:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:12.481 13:50:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:12.481 13:50:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:12.481 13:50:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:12.481 13:50:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:12.481 13:50:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:12.481 13:50:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:12.481 13:50:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:12.481 13:50:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:12.481 13:50:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:12.481 13:50:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:12.739 [2024-07-15 13:50:07.557569] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:11:12.739 request: 00:11:12.739 { 00:11:12.739 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:12.739 "nsid": 2, 00:11:12.739 "host": "nqn.2016-06.io.spdk:host1", 00:11:12.739 "method": "nvmf_ns_remove_host", 00:11:12.739 "req_id": 1 00:11:12.739 } 00:11:12.739 Got JSON-RPC error response 00:11:12.739 response: 00:11:12.739 { 00:11:12.739 "code": -32602, 00:11:12.739 "message": "Invalid parameters" 00:11:12.739 } 00:11:12.739 13:50:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:12.739 13:50:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:12.739 13:50:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:12.739 13:50:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:12.997 13:50:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:11:12.997 13:50:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:12.997 13:50:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:12.997 13:50:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:12.997 13:50:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:12.997 13:50:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:12.997 13:50:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:12.997 13:50:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:12.997 13:50:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:12.997 13:50:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:12.997 13:50:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:12.997 13:50:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:12.997 13:50:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:12.997 13:50:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:12.997 13:50:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:12.997 13:50:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:12.997 13:50:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:12.997 13:50:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:12.997 13:50:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:11:12.997 13:50:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:12.997 13:50:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:12.997 [ 0]:0x2 00:11:12.997 13:50:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:12.997 13:50:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:12.997 13:50:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0f9f10d6fa6f462599fd9912efee6af5 00:11:12.997 13:50:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
0f9f10d6fa6f462599fd9912efee6af5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:12.997 13:50:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:11:12.997 13:50:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:12.997 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.997 13:50:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3695419 00:11:12.997 13:50:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:11:12.997 13:50:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:11:12.997 13:50:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3695419 /var/tmp/host.sock 00:11:12.997 13:50:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 3695419 ']' 00:11:12.997 13:50:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:11:12.997 13:50:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:12.997 13:50:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:11:12.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:11:12.997 13:50:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:12.997 13:50:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:13.255 [2024-07-15 13:50:07.844603] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
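[Editor's note] The masking itself is driven entirely through SPDK JSON-RPCs; condensed from the calls traced in this test (rpc.py is shown with its full workspace path in the trace, and the subsystem and host NQNs are the ones this run uses):

    # Restrict namespace 1 of cnode1 to host1, then lift the restriction again.
    scripts/rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    # Namespace 2 is visible to all hosts in this run, so removing a host from it
    # is rejected with JSON-RPC error -32602 (Invalid parameters); the NOT wrapper
    # at ns_masking.sh@111 expects exactly that failure.
    scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 || true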
00:11:13.255 [2024-07-15 13:50:07.844685] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3695419 ] 00:11:13.255 EAL: No free 2048 kB hugepages reported on node 1 00:11:13.255 [2024-07-15 13:50:07.905643] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.255 [2024-07-15 13:50:08.017291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:13.513 13:50:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:13.514 13:50:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:11:13.514 13:50:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:13.771 13:50:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:14.028 13:50:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 0eb9ef48-3c92-486c-8210-0235595f391a 00:11:14.028 13:50:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:11:14.028 13:50:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 0EB9EF483C92486C82100235595F391A -i 00:11:14.286 13:50:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 67d0d2d9-fc09-403e-8851-c67ea7da871a 00:11:14.286 13:50:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:11:14.286 13:50:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 67D0D2D9FC09403E8851C67EA7DA871A -i 00:11:14.543 13:50:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:14.800 13:50:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:11:15.056 13:50:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:15.056 13:50:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:15.634 nvme0n1 00:11:15.634 13:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:11:15.634 13:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:11:16.199 nvme1n2 00:11:16.199 13:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:11:16.199 13:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:11:16.199 13:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:11:16.199 13:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:11:16.199 13:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:11:16.199 13:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:11:16.199 13:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:11:16.199 13:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:11:16.199 13:50:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:11:16.457 13:50:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 0eb9ef48-3c92-486c-8210-0235595f391a == \0\e\b\9\e\f\4\8\-\3\c\9\2\-\4\8\6\c\-\8\2\1\0\-\0\2\3\5\5\9\5\f\3\9\1\a ]] 00:11:16.457 13:50:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:11:16.457 13:50:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:11:16.457 13:50:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:11:16.715 13:50:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 67d0d2d9-fc09-403e-8851-c67ea7da871a == \6\7\d\0\d\2\d\9\-\f\c\0\9\-\4\0\3\e\-\8\8\5\1\-\c\6\7\e\a\7\d\a\8\7\1\a ]] 00:11:16.715 13:50:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 3695419 00:11:16.715 13:50:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 3695419 ']' 00:11:16.715 13:50:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 3695419 00:11:16.715 13:50:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:11:16.715 13:50:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:16.715 13:50:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3695419 00:11:16.715 13:50:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:16.715 13:50:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:16.715 13:50:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3695419' 00:11:16.715 killing process with pid 3695419 00:11:16.715 13:50:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 3695419 00:11:16.715 13:50:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 3695419 00:11:17.279 13:50:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:17.537 13:50:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:11:17.537 13:50:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:11:17.537 13:50:12 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:17.537 13:50:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:11:17.537 13:50:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:17.537 13:50:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:11:17.537 13:50:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:17.537 13:50:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:17.537 rmmod nvme_tcp 00:11:17.537 rmmod nvme_fabrics 00:11:17.537 rmmod nvme_keyring 00:11:17.537 13:50:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:17.537 13:50:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:11:17.537 13:50:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:11:17.537 13:50:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 3693797 ']' 00:11:17.537 13:50:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 3693797 00:11:17.537 13:50:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 3693797 ']' 00:11:17.537 13:50:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 3693797 00:11:17.537 13:50:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:11:17.537 13:50:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:17.537 13:50:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3693797 00:11:17.537 13:50:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:17.537 13:50:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:17.537 13:50:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3693797' 00:11:17.537 killing process with pid 3693797 00:11:17.537 13:50:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 3693797 00:11:17.537 13:50:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 3693797 00:11:17.810 13:50:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:17.810 13:50:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:17.810 13:50:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:17.810 13:50:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:17.810 13:50:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:17.810 13:50:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.810 13:50:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:17.810 13:50:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:20.337 13:50:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:20.337 00:11:20.337 real 0m21.400s 00:11:20.337 user 0m27.708s 00:11:20.337 sys 0m4.188s 00:11:20.337 13:50:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:20.337 13:50:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:20.337 ************************************ 00:11:20.337 END TEST nvmf_ns_masking 00:11:20.337 ************************************ 00:11:20.337 13:50:14 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:11:20.337 13:50:14 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:11:20.337 13:50:14 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:20.337 13:50:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:20.337 13:50:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:20.337 13:50:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:20.337 ************************************ 00:11:20.337 START TEST nvmf_nvme_cli 00:11:20.337 ************************************ 00:11:20.337 13:50:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:20.337 * Looking for test storage... 00:11:20.337 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:20.337 13:50:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:20.337 13:50:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:11:20.337 13:50:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:20.337 13:50:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:20.337 13:50:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:20.337 13:50:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:20.337 13:50:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:20.337 13:50:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:20.337 13:50:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:20.337 13:50:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:20.337 13:50:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:20.337 13:50:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:20.337 13:50:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:20.337 13:50:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:20.337 13:50:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:20.337 13:50:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:20.337 13:50:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:20.337 13:50:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:20.337 13:50:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:20.337 13:50:14 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:20.337 13:50:14 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:20.337 13:50:14 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:20.337 13:50:14 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.337 13:50:14 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.337 13:50:14 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.337 13:50:14 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:11:20.337 13:50:14 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.337 13:50:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:11:20.337 13:50:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:20.337 13:50:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:20.337 13:50:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:20.337 13:50:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:20.337 13:50:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:20.337 13:50:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:20.337 13:50:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:20.337 13:50:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:20.337 13:50:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:20.337 13:50:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:20.337 13:50:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:11:20.337 13:50:14 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:11:20.337 13:50:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:20.337 13:50:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:20.337 13:50:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:20.337 13:50:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:20.337 13:50:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:20.337 13:50:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:20.337 13:50:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:20.337 13:50:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:20.337 13:50:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:20.337 13:50:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:20.337 13:50:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:11:20.337 13:50:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:11:22.234 Found 0000:84:00.0 (0x8086 - 0x159b) 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:11:22.234 Found 0000:84:00.1 (0x8086 - 0x159b) 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:11:22.234 Found net devices under 0000:84:00.0: cvl_0_0 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:11:22.234 Found net devices under 0000:84:00.1: cvl_0_1 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:22.234 13:50:16 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:22.234 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:22.234 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:11:22.234 00:11:22.234 --- 10.0.0.2 ping statistics --- 00:11:22.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:22.234 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:22.234 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:22.234 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:11:22.234 00:11:22.234 --- 10.0.0.1 ping statistics --- 00:11:22.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:22.234 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:11:22.234 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:22.235 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:22.235 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:22.235 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:22.235 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:22.235 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:22.235 13:50:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:22.235 13:50:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:11:22.235 13:50:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:22.235 13:50:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:22.235 13:50:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:22.235 13:50:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=3697933 00:11:22.235 13:50:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:22.235 13:50:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 3697933 00:11:22.235 13:50:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 3697933 ']' 00:11:22.235 13:50:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.235 13:50:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:22.235 13:50:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:22.235 13:50:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:22.235 13:50:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:22.492 [2024-07-15 13:50:17.075979] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
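[Editor's note] The nvmf_tcp_init sequence traced just above sets up target and initiator on a single machine by moving one port of the e810 NIC into a private network namespace; condensed from the trace (the cvl_0_0/cvl_0_1 interface names and the cvl_0_0_ns_spdk namespace are specific to this runner):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                     # connectivity verified in both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt process is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt), so it listens on 10.0.0.2:4420 while nvme-cli connects from the root namespace as the initiator.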
00:11:22.492 [2024-07-15 13:50:17.076076] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:22.492 EAL: No free 2048 kB hugepages reported on node 1 00:11:22.492 [2024-07-15 13:50:17.137991] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:22.492 [2024-07-15 13:50:17.241466] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:22.492 [2024-07-15 13:50:17.241525] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:22.492 [2024-07-15 13:50:17.241552] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:22.492 [2024-07-15 13:50:17.241563] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:22.492 [2024-07-15 13:50:17.241572] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:22.492 [2024-07-15 13:50:17.241705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:22.492 [2024-07-15 13:50:17.241817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:22.492 [2024-07-15 13:50:17.241843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:22.492 [2024-07-15 13:50:17.241846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.750 13:50:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:22.750 13:50:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:11:22.750 13:50:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:22.750 13:50:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:22.750 13:50:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:22.750 13:50:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:22.750 13:50:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:22.750 13:50:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.750 13:50:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:22.750 [2024-07-15 13:50:17.407624] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:22.750 13:50:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.750 13:50:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:22.750 13:50:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.750 13:50:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:22.750 Malloc0 00:11:22.750 13:50:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.750 13:50:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:22.750 13:50:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.750 13:50:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:22.750 Malloc1 00:11:22.750 13:50:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.750 13:50:17 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:11:22.750 13:50:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.750 13:50:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:22.750 13:50:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.750 13:50:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:22.750 13:50:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.750 13:50:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:22.750 13:50:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.750 13:50:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:22.750 13:50:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.750 13:50:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:22.750 13:50:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.750 13:50:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:22.750 13:50:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.750 13:50:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:22.750 [2024-07-15 13:50:17.493881] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:22.750 13:50:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.750 13:50:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:22.750 13:50:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.750 13:50:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:22.750 13:50:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.750 13:50:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 4420 00:11:22.750 00:11:22.750 Discovery Log Number of Records 2, Generation counter 2 00:11:22.750 =====Discovery Log Entry 0====== 00:11:22.750 trtype: tcp 00:11:22.750 adrfam: ipv4 00:11:22.750 subtype: current discovery subsystem 00:11:22.750 treq: not required 00:11:22.750 portid: 0 00:11:22.750 trsvcid: 4420 00:11:22.750 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:22.750 traddr: 10.0.0.2 00:11:22.750 eflags: explicit discovery connections, duplicate discovery information 00:11:22.750 sectype: none 00:11:22.750 =====Discovery Log Entry 1====== 00:11:22.750 trtype: tcp 00:11:22.750 adrfam: ipv4 00:11:22.750 subtype: nvme subsystem 00:11:22.750 treq: not required 00:11:22.750 portid: 0 00:11:22.750 trsvcid: 4420 00:11:22.750 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:22.750 traddr: 10.0.0.2 00:11:22.750 eflags: none 00:11:22.750 sectype: none 00:11:22.750 13:50:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:11:23.007 13:50:17 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:11:23.007 13:50:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:23.007 13:50:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:23.007 13:50:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:23.007 13:50:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:23.007 13:50:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:23.007 13:50:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:23.007 13:50:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:23.007 13:50:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:11:23.007 13:50:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:23.571 13:50:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:23.571 13:50:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:11:23.571 13:50:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:23.571 13:50:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:11:23.571 13:50:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:11:23.571 13:50:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:11:25.466 13:50:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:25.466 13:50:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:25.466 13:50:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:25.466 13:50:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:11:25.466 13:50:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:25.466 13:50:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:11:25.466 13:50:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:11:25.466 13:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:25.466 13:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:25.466 13:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:25.723 13:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:25.723 13:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:25.723 13:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:25.723 13:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:25.723 13:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:25.723 13:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:11:25.723 13:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:25.723 13:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:25.723 13:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:11:25.723 13:50:20 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:25.723 13:50:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:11:25.723 /dev/nvme0n1 ]] 00:11:25.723 13:50:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:11:25.723 13:50:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:11:25.723 13:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:25.723 13:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:25.723 13:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:25.980 13:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:25.980 13:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:25.980 13:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:25.980 13:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:25.980 13:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:25.980 13:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:11:25.980 13:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:25.980 13:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:25.980 13:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:11:25.980 13:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:25.980 13:50:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:11:25.980 13:50:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:26.237 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.237 13:50:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:26.238 13:50:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:11:26.238 13:50:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:26.238 13:50:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:26.238 13:50:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:26.238 13:50:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:26.238 13:50:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:11:26.238 13:50:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:11:26.238 13:50:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:26.238 13:50:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.238 13:50:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:26.238 13:50:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.238 13:50:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:26.238 13:50:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:11:26.238 13:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:26.238 13:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:11:26.238 13:50:20 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:26.238 13:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:11:26.238 13:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:26.238 13:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:26.238 rmmod nvme_tcp 00:11:26.238 rmmod nvme_fabrics 00:11:26.238 rmmod nvme_keyring 00:11:26.238 13:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:26.238 13:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:11:26.238 13:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:11:26.238 13:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 3697933 ']' 00:11:26.238 13:50:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 3697933 00:11:26.238 13:50:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 3697933 ']' 00:11:26.238 13:50:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 3697933 00:11:26.238 13:50:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:11:26.238 13:50:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:26.238 13:50:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3697933 00:11:26.238 13:50:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:26.238 13:50:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:26.238 13:50:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3697933' 00:11:26.238 killing process with pid 3697933 00:11:26.238 13:50:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 3697933 00:11:26.238 13:50:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 3697933 00:11:26.495 13:50:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:26.495 13:50:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:26.496 13:50:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:26.496 13:50:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:26.496 13:50:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:26.496 13:50:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.496 13:50:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:26.496 13:50:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.027 13:50:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:29.027 00:11:29.027 real 0m8.601s 00:11:29.027 user 0m16.284s 00:11:29.027 sys 0m2.320s 00:11:29.027 13:50:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:29.027 13:50:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:29.027 ************************************ 00:11:29.027 END TEST nvmf_nvme_cli 00:11:29.027 ************************************ 00:11:29.027 13:50:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:29.027 13:50:23 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:11:29.027 13:50:23 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:11:29.027 13:50:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:29.027 13:50:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:29.027 13:50:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:29.027 ************************************ 00:11:29.027 START TEST nvmf_vfio_user 00:11:29.027 ************************************ 00:11:29.027 13:50:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:11:29.027 * Looking for test storage... 00:11:29.027 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:29.027 13:50:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:29.027 13:50:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:11:29.027 13:50:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:29.027 13:50:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:29.027 13:50:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:29.027 13:50:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:29.027 13:50:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:29.027 13:50:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:29.027 13:50:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:29.027 13:50:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:29.027 13:50:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:29.027 13:50:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:29.027 13:50:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:29.027 13:50:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:29.027 13:50:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:29.027 13:50:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:29.027 13:50:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:29.027 13:50:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:29.027 13:50:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:29.027 13:50:23 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:29.027 13:50:23 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:29.027 13:50:23 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:29.028 13:50:23 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.028 13:50:23 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.028 13:50:23 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.028 13:50:23 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:11:29.028 13:50:23 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.028 13:50:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:11:29.028 13:50:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:29.028 13:50:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:29.028 13:50:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:29.028 13:50:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:29.028 13:50:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:29.028 13:50:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:29.028 13:50:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:29.028 13:50:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:29.028 13:50:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:11:29.028 13:50:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:29.028 13:50:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:11:29.028 
13:50:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:29.028 13:50:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:11:29.028 13:50:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:11:29.028 13:50:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:11:29.028 13:50:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:11:29.028 13:50:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:11:29.028 13:50:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:11:29.028 13:50:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3698742 00:11:29.028 13:50:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:11:29.028 13:50:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3698742' 00:11:29.028 Process pid: 3698742 00:11:29.028 13:50:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:29.028 13:50:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3698742 00:11:29.028 13:50:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 3698742 ']' 00:11:29.028 13:50:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.028 13:50:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:29.028 13:50:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:29.028 13:50:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:29.028 13:50:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:11:29.028 [2024-07-15 13:50:23.467080] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:11:29.028 [2024-07-15 13:50:23.467164] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:29.028 EAL: No free 2048 kB hugepages reported on node 1 00:11:29.028 [2024-07-15 13:50:23.530599] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:29.028 [2024-07-15 13:50:23.646434] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:29.028 [2024-07-15 13:50:23.646490] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:29.028 [2024-07-15 13:50:23.646519] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:29.028 [2024-07-15 13:50:23.646531] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:29.028 [2024-07-15 13:50:23.646540] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:29.028 [2024-07-15 13:50:23.646594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:29.028 [2024-07-15 13:50:23.646655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:29.028 [2024-07-15 13:50:23.646722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:29.028 [2024-07-15 13:50:23.646725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.028 13:50:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:29.028 13:50:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:11:29.028 13:50:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:11:29.959 13:50:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:11:30.216 13:50:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:11:30.216 13:50:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:11:30.216 13:50:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:30.216 13:50:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:11:30.216 13:50:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:30.473 Malloc1 00:11:30.473 13:50:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:11:30.731 13:50:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:11:31.006 13:50:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:11:31.274 13:50:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:31.274 13:50:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:11:31.274 13:50:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:31.531 Malloc2 00:11:31.531 13:50:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:11:31.787 13:50:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:11:32.044 13:50:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:11:32.301 13:50:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:11:32.301 13:50:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:11:32.301 13:50:27 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:32.301 13:50:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:11:32.301 13:50:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:11:32.301 13:50:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:11:32.301 [2024-07-15 13:50:27.074290] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:11:32.301 [2024-07-15 13:50:27.074333] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3699281 ] 00:11:32.301 EAL: No free 2048 kB hugepages reported on node 1 00:11:32.301 [2024-07-15 13:50:27.109133] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:11:32.301 [2024-07-15 13:50:27.117222] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:32.301 [2024-07-15 13:50:27.117250] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f26893ac000 00:11:32.301 [2024-07-15 13:50:27.118219] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:32.301 [2024-07-15 13:50:27.119212] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:32.301 [2024-07-15 13:50:27.120219] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:32.301 [2024-07-15 13:50:27.121222] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:32.301 [2024-07-15 13:50:27.122227] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:32.301 [2024-07-15 13:50:27.123233] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:32.302 [2024-07-15 13:50:27.124239] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:32.302 [2024-07-15 13:50:27.125238] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:32.302 [2024-07-15 13:50:27.126244] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:32.302 [2024-07-15 13:50:27.126265] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f26893a1000 00:11:32.302 [2024-07-15 13:50:27.127379] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:32.560 [2024-07-15 13:50:27.143717] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:11:32.560 [2024-07-15 13:50:27.147774] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:11:32.560 [2024-07-15 13:50:27.150396] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:11:32.560 [2024-07-15 13:50:27.150449] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:11:32.560 [2024-07-15 13:50:27.150536] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:11:32.560 [2024-07-15 13:50:27.150564] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:11:32.560 [2024-07-15 13:50:27.150573] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:11:32.560 [2024-07-15 13:50:27.151377] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:11:32.560 [2024-07-15 13:50:27.151404] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:11:32.560 [2024-07-15 13:50:27.151417] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:11:32.560 [2024-07-15 13:50:27.152380] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:11:32.560 [2024-07-15 13:50:27.152399] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:11:32.560 [2024-07-15 13:50:27.152412] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:11:32.560 [2024-07-15 13:50:27.153379] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:11:32.560 [2024-07-15 13:50:27.153398] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:11:32.560 [2024-07-15 13:50:27.154386] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:11:32.560 [2024-07-15 13:50:27.154404] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:11:32.560 [2024-07-15 13:50:27.154413] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:11:32.560 [2024-07-15 13:50:27.154425] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:11:32.560 [2024-07-15 13:50:27.154534] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:11:32.560 [2024-07-15 13:50:27.154542] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:11:32.560 [2024-07-15 13:50:27.154550] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:11:32.560 [2024-07-15 13:50:27.155392] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:11:32.560 [2024-07-15 13:50:27.156397] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:11:32.560 [2024-07-15 13:50:27.157399] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:11:32.560 [2024-07-15 13:50:27.158398] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:32.560 [2024-07-15 13:50:27.158523] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:11:32.560 [2024-07-15 13:50:27.159409] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:11:32.560 [2024-07-15 13:50:27.159428] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:11:32.560 [2024-07-15 13:50:27.159436] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:11:32.560 [2024-07-15 13:50:27.159460] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:11:32.560 [2024-07-15 13:50:27.159473] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:11:32.560 [2024-07-15 13:50:27.159500] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:32.560 [2024-07-15 13:50:27.159510] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:32.560 [2024-07-15 13:50:27.159529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:32.560 [2024-07-15 13:50:27.159594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:11:32.560 [2024-07-15 13:50:27.159610] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:11:32.560 [2024-07-15 13:50:27.159622] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:11:32.560 [2024-07-15 13:50:27.159630] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:11:32.560 [2024-07-15 13:50:27.159637] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:11:32.560 [2024-07-15 13:50:27.159645] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:11:32.560 [2024-07-15 13:50:27.159652] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:11:32.560 [2024-07-15 13:50:27.159659] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:11:32.560 [2024-07-15 13:50:27.159671] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:11:32.560 [2024-07-15 13:50:27.159686] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:11:32.560 [2024-07-15 13:50:27.159705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:11:32.560 [2024-07-15 13:50:27.159747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:32.560 [2024-07-15 13:50:27.159763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:32.560 [2024-07-15 13:50:27.159776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:32.560 [2024-07-15 13:50:27.159788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:32.560 [2024-07-15 13:50:27.159797] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:11:32.560 [2024-07-15 13:50:27.159814] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:11:32.560 [2024-07-15 13:50:27.159829] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:11:32.560 [2024-07-15 13:50:27.159842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:11:32.560 [2024-07-15 13:50:27.159852] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:11:32.560 [2024-07-15 13:50:27.159861] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:11:32.560 [2024-07-15 13:50:27.159872] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:11:32.560 [2024-07-15 13:50:27.159882] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:11:32.560 [2024-07-15 13:50:27.159902] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:32.560 [2024-07-15 13:50:27.159915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:11:32.560 [2024-07-15 13:50:27.159982] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:11:32.560 [2024-07-15 13:50:27.159997] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:11:32.560 [2024-07-15 13:50:27.160011] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:11:32.560 [2024-07-15 13:50:27.160019] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:11:32.560 [2024-07-15 13:50:27.160043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:11:32.560 [2024-07-15 13:50:27.160063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:11:32.560 [2024-07-15 13:50:27.160080] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:11:32.560 [2024-07-15 13:50:27.160112] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:11:32.560 [2024-07-15 13:50:27.160126] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:11:32.560 [2024-07-15 13:50:27.160138] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:32.560 [2024-07-15 13:50:27.160146] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:32.561 [2024-07-15 13:50:27.160155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:32.561 [2024-07-15 13:50:27.160186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:11:32.561 [2024-07-15 13:50:27.160207] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:11:32.561 [2024-07-15 13:50:27.160222] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:11:32.561 [2024-07-15 13:50:27.160234] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:32.561 [2024-07-15 13:50:27.160242] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:32.561 [2024-07-15 13:50:27.160251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:32.561 [2024-07-15 13:50:27.160262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:11:32.561 [2024-07-15 13:50:27.160275] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:11:32.561 [2024-07-15 13:50:27.160286] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
00:11:32.561 [2024-07-15 13:50:27.160300] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:11:32.561 [2024-07-15 13:50:27.160310] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:11:32.561 [2024-07-15 13:50:27.160321] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:11:32.561 [2024-07-15 13:50:27.160330] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:11:32.561 [2024-07-15 13:50:27.160338] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:11:32.561 [2024-07-15 13:50:27.160345] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:11:32.561 [2024-07-15 13:50:27.160353] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:11:32.561 [2024-07-15 13:50:27.160378] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:11:32.561 [2024-07-15 13:50:27.160396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:11:32.561 [2024-07-15 13:50:27.160415] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:11:32.561 [2024-07-15 13:50:27.160427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:11:32.561 [2024-07-15 13:50:27.160442] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:11:32.561 [2024-07-15 13:50:27.160454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:11:32.561 [2024-07-15 13:50:27.160470] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:32.561 [2024-07-15 13:50:27.160481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:11:32.561 [2024-07-15 13:50:27.160503] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:11:32.561 [2024-07-15 13:50:27.160513] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:11:32.561 [2024-07-15 13:50:27.160519] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:11:32.561 [2024-07-15 13:50:27.160525] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:11:32.561 [2024-07-15 13:50:27.160534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:11:32.561 [2024-07-15 13:50:27.160545] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:11:32.561 
[2024-07-15 13:50:27.160553] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:11:32.561 [2024-07-15 13:50:27.160562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:11:32.561 [2024-07-15 13:50:27.160572] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:11:32.561 [2024-07-15 13:50:27.160580] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:32.561 [2024-07-15 13:50:27.160588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:32.561 [2024-07-15 13:50:27.160600] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:11:32.561 [2024-07-15 13:50:27.160608] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:11:32.561 [2024-07-15 13:50:27.160616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:11:32.561 [2024-07-15 13:50:27.160631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:11:32.561 [2024-07-15 13:50:27.160652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:11:32.561 [2024-07-15 13:50:27.160670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:11:32.561 [2024-07-15 13:50:27.160682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:11:32.561 ===================================================== 00:11:32.561 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:32.561 ===================================================== 00:11:32.561 Controller Capabilities/Features 00:11:32.561 ================================ 00:11:32.561 Vendor ID: 4e58 00:11:32.561 Subsystem Vendor ID: 4e58 00:11:32.561 Serial Number: SPDK1 00:11:32.561 Model Number: SPDK bdev Controller 00:11:32.561 Firmware Version: 24.09 00:11:32.561 Recommended Arb Burst: 6 00:11:32.561 IEEE OUI Identifier: 8d 6b 50 00:11:32.561 Multi-path I/O 00:11:32.561 May have multiple subsystem ports: Yes 00:11:32.561 May have multiple controllers: Yes 00:11:32.561 Associated with SR-IOV VF: No 00:11:32.561 Max Data Transfer Size: 131072 00:11:32.561 Max Number of Namespaces: 32 00:11:32.561 Max Number of I/O Queues: 127 00:11:32.561 NVMe Specification Version (VS): 1.3 00:11:32.561 NVMe Specification Version (Identify): 1.3 00:11:32.561 Maximum Queue Entries: 256 00:11:32.561 Contiguous Queues Required: Yes 00:11:32.561 Arbitration Mechanisms Supported 00:11:32.561 Weighted Round Robin: Not Supported 00:11:32.561 Vendor Specific: Not Supported 00:11:32.561 Reset Timeout: 15000 ms 00:11:32.561 Doorbell Stride: 4 bytes 00:11:32.561 NVM Subsystem Reset: Not Supported 00:11:32.561 Command Sets Supported 00:11:32.561 NVM Command Set: Supported 00:11:32.561 Boot Partition: Not Supported 00:11:32.561 Memory Page Size Minimum: 4096 bytes 00:11:32.561 Memory Page Size Maximum: 4096 bytes 00:11:32.561 Persistent Memory Region: Not Supported 
00:11:32.561 Optional Asynchronous Events Supported 00:11:32.561 Namespace Attribute Notices: Supported 00:11:32.561 Firmware Activation Notices: Not Supported 00:11:32.561 ANA Change Notices: Not Supported 00:11:32.561 PLE Aggregate Log Change Notices: Not Supported 00:11:32.561 LBA Status Info Alert Notices: Not Supported 00:11:32.561 EGE Aggregate Log Change Notices: Not Supported 00:11:32.561 Normal NVM Subsystem Shutdown event: Not Supported 00:11:32.561 Zone Descriptor Change Notices: Not Supported 00:11:32.561 Discovery Log Change Notices: Not Supported 00:11:32.561 Controller Attributes 00:11:32.561 128-bit Host Identifier: Supported 00:11:32.561 Non-Operational Permissive Mode: Not Supported 00:11:32.561 NVM Sets: Not Supported 00:11:32.561 Read Recovery Levels: Not Supported 00:11:32.561 Endurance Groups: Not Supported 00:11:32.561 Predictable Latency Mode: Not Supported 00:11:32.561 Traffic Based Keep ALive: Not Supported 00:11:32.561 Namespace Granularity: Not Supported 00:11:32.561 SQ Associations: Not Supported 00:11:32.561 UUID List: Not Supported 00:11:32.561 Multi-Domain Subsystem: Not Supported 00:11:32.561 Fixed Capacity Management: Not Supported 00:11:32.561 Variable Capacity Management: Not Supported 00:11:32.561 Delete Endurance Group: Not Supported 00:11:32.561 Delete NVM Set: Not Supported 00:11:32.561 Extended LBA Formats Supported: Not Supported 00:11:32.561 Flexible Data Placement Supported: Not Supported 00:11:32.561 00:11:32.561 Controller Memory Buffer Support 00:11:32.561 ================================ 00:11:32.561 Supported: No 00:11:32.561 00:11:32.561 Persistent Memory Region Support 00:11:32.561 ================================ 00:11:32.561 Supported: No 00:11:32.561 00:11:32.561 Admin Command Set Attributes 00:11:32.561 ============================ 00:11:32.561 Security Send/Receive: Not Supported 00:11:32.561 Format NVM: Not Supported 00:11:32.561 Firmware Activate/Download: Not Supported 00:11:32.561 Namespace Management: Not Supported 00:11:32.561 Device Self-Test: Not Supported 00:11:32.561 Directives: Not Supported 00:11:32.561 NVMe-MI: Not Supported 00:11:32.561 Virtualization Management: Not Supported 00:11:32.561 Doorbell Buffer Config: Not Supported 00:11:32.561 Get LBA Status Capability: Not Supported 00:11:32.561 Command & Feature Lockdown Capability: Not Supported 00:11:32.561 Abort Command Limit: 4 00:11:32.561 Async Event Request Limit: 4 00:11:32.561 Number of Firmware Slots: N/A 00:11:32.561 Firmware Slot 1 Read-Only: N/A 00:11:32.561 Firmware Activation Without Reset: N/A 00:11:32.561 Multiple Update Detection Support: N/A 00:11:32.561 Firmware Update Granularity: No Information Provided 00:11:32.561 Per-Namespace SMART Log: No 00:11:32.561 Asymmetric Namespace Access Log Page: Not Supported 00:11:32.561 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:11:32.561 Command Effects Log Page: Supported 00:11:32.561 Get Log Page Extended Data: Supported 00:11:32.561 Telemetry Log Pages: Not Supported 00:11:32.561 Persistent Event Log Pages: Not Supported 00:11:32.561 Supported Log Pages Log Page: May Support 00:11:32.561 Commands Supported & Effects Log Page: Not Supported 00:11:32.561 Feature Identifiers & Effects Log Page:May Support 00:11:32.561 NVMe-MI Commands & Effects Log Page: May Support 00:11:32.562 Data Area 4 for Telemetry Log: Not Supported 00:11:32.562 Error Log Page Entries Supported: 128 00:11:32.562 Keep Alive: Supported 00:11:32.562 Keep Alive Granularity: 10000 ms 00:11:32.562 00:11:32.562 NVM Command Set Attributes 
00:11:32.562 ========================== 00:11:32.562 Submission Queue Entry Size 00:11:32.562 Max: 64 00:11:32.562 Min: 64 00:11:32.562 Completion Queue Entry Size 00:11:32.562 Max: 16 00:11:32.562 Min: 16 00:11:32.562 Number of Namespaces: 32 00:11:32.562 Compare Command: Supported 00:11:32.562 Write Uncorrectable Command: Not Supported 00:11:32.562 Dataset Management Command: Supported 00:11:32.562 Write Zeroes Command: Supported 00:11:32.562 Set Features Save Field: Not Supported 00:11:32.562 Reservations: Not Supported 00:11:32.562 Timestamp: Not Supported 00:11:32.562 Copy: Supported 00:11:32.562 Volatile Write Cache: Present 00:11:32.562 Atomic Write Unit (Normal): 1 00:11:32.562 Atomic Write Unit (PFail): 1 00:11:32.562 Atomic Compare & Write Unit: 1 00:11:32.562 Fused Compare & Write: Supported 00:11:32.562 Scatter-Gather List 00:11:32.562 SGL Command Set: Supported (Dword aligned) 00:11:32.562 SGL Keyed: Not Supported 00:11:32.562 SGL Bit Bucket Descriptor: Not Supported 00:11:32.562 SGL Metadata Pointer: Not Supported 00:11:32.562 Oversized SGL: Not Supported 00:11:32.562 SGL Metadata Address: Not Supported 00:11:32.562 SGL Offset: Not Supported 00:11:32.562 Transport SGL Data Block: Not Supported 00:11:32.562 Replay Protected Memory Block: Not Supported 00:11:32.562 00:11:32.562 Firmware Slot Information 00:11:32.562 ========================= 00:11:32.562 Active slot: 1 00:11:32.562 Slot 1 Firmware Revision: 24.09 00:11:32.562 00:11:32.562 00:11:32.562 Commands Supported and Effects 00:11:32.562 ============================== 00:11:32.562 Admin Commands 00:11:32.562 -------------- 00:11:32.562 Get Log Page (02h): Supported 00:11:32.562 Identify (06h): Supported 00:11:32.562 Abort (08h): Supported 00:11:32.562 Set Features (09h): Supported 00:11:32.562 Get Features (0Ah): Supported 00:11:32.562 Asynchronous Event Request (0Ch): Supported 00:11:32.562 Keep Alive (18h): Supported 00:11:32.562 I/O Commands 00:11:32.562 ------------ 00:11:32.562 Flush (00h): Supported LBA-Change 00:11:32.562 Write (01h): Supported LBA-Change 00:11:32.562 Read (02h): Supported 00:11:32.562 Compare (05h): Supported 00:11:32.562 Write Zeroes (08h): Supported LBA-Change 00:11:32.562 Dataset Management (09h): Supported LBA-Change 00:11:32.562 Copy (19h): Supported LBA-Change 00:11:32.562 00:11:32.562 Error Log 00:11:32.562 ========= 00:11:32.562 00:11:32.562 Arbitration 00:11:32.562 =========== 00:11:32.562 Arbitration Burst: 1 00:11:32.562 00:11:32.562 Power Management 00:11:32.562 ================ 00:11:32.562 Number of Power States: 1 00:11:32.562 Current Power State: Power State #0 00:11:32.562 Power State #0: 00:11:32.562 Max Power: 0.00 W 00:11:32.562 Non-Operational State: Operational 00:11:32.562 Entry Latency: Not Reported 00:11:32.562 Exit Latency: Not Reported 00:11:32.562 Relative Read Throughput: 0 00:11:32.562 Relative Read Latency: 0 00:11:32.562 Relative Write Throughput: 0 00:11:32.562 Relative Write Latency: 0 00:11:32.562 Idle Power: Not Reported 00:11:32.562 Active Power: Not Reported 00:11:32.562 Non-Operational Permissive Mode: Not Supported 00:11:32.562 00:11:32.562 Health Information 00:11:32.562 ================== 00:11:32.562 Critical Warnings: 00:11:32.562 Available Spare Space: OK 00:11:32.562 Temperature: OK 00:11:32.562 Device Reliability: OK 00:11:32.562 Read Only: No 00:11:32.562 Volatile Memory Backup: OK 00:11:32.562 Current Temperature: 0 Kelvin (-273 Celsius) 00:11:32.562 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:11:32.562 Available Spare: 0% 00:11:32.562 
[2024-07-15 13:50:27.160832] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:11:32.562 [2024-07-15 13:50:27.160850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:11:32.562 [2024-07-15 13:50:27.160896] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:11:32.562 [2024-07-15 13:50:27.160915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.562 [2024-07-15 13:50:27.160927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.562 [2024-07-15 13:50:27.160938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.562 [2024-07-15 13:50:27.160948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.562 [2024-07-15 13:50:27.161427] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:11:32.562 [2024-07-15 13:50:27.161448] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:11:32.562 [2024-07-15 13:50:27.162428] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:32.562 [2024-07-15 13:50:27.162520] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:11:32.562 [2024-07-15 13:50:27.162534] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:11:32.562 [2024-07-15 13:50:27.163436] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:11:32.562 [2024-07-15 13:50:27.163459] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:11:32.562 [2024-07-15 13:50:27.163512] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:11:32.562 [2024-07-15 13:50:27.167748] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:32.562 Available Spare Threshold: 0% 00:11:32.562 Life Percentage Used: 0% 00:11:32.562 Data Units Read: 0 00:11:32.562 Data Units Written: 0 00:11:32.562 Host Read Commands: 0 00:11:32.562 Host Write Commands: 0 00:11:32.562 Controller Busy Time: 0 minutes 00:11:32.562 Power Cycles: 0 00:11:32.562 Power On Hours: 0 hours 00:11:32.562 Unsafe Shutdowns: 0 00:11:32.562 Unrecoverable Media Errors: 0 00:11:32.562 Lifetime Error Log Entries: 0 00:11:32.562 Warning Temperature Time: 0 minutes 00:11:32.562 Critical Temperature Time: 0 minutes 00:11:32.562 00:11:32.562 Number of Queues 00:11:32.562 ================ 00:11:32.562 Number of I/O Submission Queues: 127 00:11:32.562 Number of I/O Completion Queues: 127 00:11:32.562 00:11:32.562 Active Namespaces 00:11:32.562 ================= 00:11:32.562 Namespace ID:1 00:11:32.562 Error Recovery Timeout: Unlimited 00:11:32.562 Command
Set Identifier: NVM (00h) 00:11:32.562 Deallocate: Supported 00:11:32.562 Deallocated/Unwritten Error: Not Supported 00:11:32.562 Deallocated Read Value: Unknown 00:11:32.562 Deallocate in Write Zeroes: Not Supported 00:11:32.562 Deallocated Guard Field: 0xFFFF 00:11:32.562 Flush: Supported 00:11:32.562 Reservation: Supported 00:11:32.562 Namespace Sharing Capabilities: Multiple Controllers 00:11:32.562 Size (in LBAs): 131072 (0GiB) 00:11:32.562 Capacity (in LBAs): 131072 (0GiB) 00:11:32.562 Utilization (in LBAs): 131072 (0GiB) 00:11:32.562 NGUID: B51F7163289244A18F95D3B1196B493F 00:11:32.562 UUID: b51f7163-2892-44a1-8f95-d3b1196b493f 00:11:32.562 Thin Provisioning: Not Supported 00:11:32.562 Per-NS Atomic Units: Yes 00:11:32.562 Atomic Boundary Size (Normal): 0 00:11:32.562 Atomic Boundary Size (PFail): 0 00:11:32.562 Atomic Boundary Offset: 0 00:11:32.562 Maximum Single Source Range Length: 65535 00:11:32.562 Maximum Copy Length: 65535 00:11:32.562 Maximum Source Range Count: 1 00:11:32.562 NGUID/EUI64 Never Reused: No 00:11:32.562 Namespace Write Protected: No 00:11:32.562 Number of LBA Formats: 1 00:11:32.562 Current LBA Format: LBA Format #00 00:11:32.562 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:32.562 00:11:32.562 13:50:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:11:32.562 EAL: No free 2048 kB hugepages reported on node 1 00:11:32.820 [2024-07-15 13:50:27.398591] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:38.089 Initializing NVMe Controllers 00:11:38.089 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:38.089 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:11:38.089 Initialization complete. Launching workers. 00:11:38.089 ======================================================== 00:11:38.089 Latency(us) 00:11:38.089 Device Information : IOPS MiB/s Average min max 00:11:38.089 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 34670.54 135.43 3691.38 1160.67 9580.43 00:11:38.089 ======================================================== 00:11:38.089 Total : 34670.54 135.43 3691.38 1160.67 9580.43 00:11:38.089 00:11:38.089 [2024-07-15 13:50:32.419231] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:38.089 13:50:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:11:38.089 EAL: No free 2048 kB hugepages reported on node 1 00:11:38.089 [2024-07-15 13:50:32.660362] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:43.360 Initializing NVMe Controllers 00:11:43.360 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:43.360 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:11:43.360 Initialization complete. Launching workers. 
00:11:43.360 ======================================================== 00:11:43.360 Latency(us) 00:11:43.360 Device Information : IOPS MiB/s Average min max 00:11:43.360 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15931.60 62.23 8044.11 5996.43 15827.31 00:11:43.360 ======================================================== 00:11:43.360 Total : 15931.60 62.23 8044.11 5996.43 15827.31 00:11:43.360 00:11:43.360 [2024-07-15 13:50:37.696587] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:43.360 13:50:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:11:43.360 EAL: No free 2048 kB hugepages reported on node 1 00:11:43.360 [2024-07-15 13:50:37.920742] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:48.657 [2024-07-15 13:50:42.982059] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:48.657 Initializing NVMe Controllers 00:11:48.657 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:48.657 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:48.657 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:11:48.657 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:11:48.657 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:11:48.657 Initialization complete. Launching workers. 00:11:48.657 Starting thread on core 2 00:11:48.657 Starting thread on core 3 00:11:48.657 Starting thread on core 1 00:11:48.657 13:50:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:11:48.657 EAL: No free 2048 kB hugepages reported on node 1 00:11:48.657 [2024-07-15 13:50:43.301203] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:51.942 [2024-07-15 13:50:46.372531] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:51.942 Initializing NVMe Controllers 00:11:51.942 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:51.942 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:51.942 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:11:51.942 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:11:51.942 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:11:51.942 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:11:51.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:11:51.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:11:51.942 Initialization complete. Launching workers. 
00:11:51.942 Starting thread on core 1 with urgent priority queue 00:11:51.942 Starting thread on core 2 with urgent priority queue 00:11:51.942 Starting thread on core 3 with urgent priority queue 00:11:51.942 Starting thread on core 0 with urgent priority queue 00:11:51.942 SPDK bdev Controller (SPDK1 ) core 0: 5053.33 IO/s 19.79 secs/100000 ios 00:11:51.942 SPDK bdev Controller (SPDK1 ) core 1: 4912.33 IO/s 20.36 secs/100000 ios 00:11:51.942 SPDK bdev Controller (SPDK1 ) core 2: 5115.33 IO/s 19.55 secs/100000 ios 00:11:51.942 SPDK bdev Controller (SPDK1 ) core 3: 5226.33 IO/s 19.13 secs/100000 ios 00:11:51.942 ======================================================== 00:11:51.942 00:11:51.942 13:50:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:11:51.942 EAL: No free 2048 kB hugepages reported on node 1 00:11:51.942 [2024-07-15 13:50:46.677528] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:51.942 Initializing NVMe Controllers 00:11:51.942 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:51.942 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:51.942 Namespace ID: 1 size: 0GB 00:11:51.942 Initialization complete. 00:11:51.942 INFO: using host memory buffer for IO 00:11:51.942 Hello world! 00:11:51.942 [2024-07-15 13:50:46.711153] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:51.942 13:50:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:11:52.202 EAL: No free 2048 kB hugepages reported on node 1 00:11:52.202 [2024-07-15 13:50:47.011267] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:53.579 Initializing NVMe Controllers 00:11:53.579 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:53.579 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:53.579 Initialization complete. Launching workers. 
00:11:53.579 submit (in ns) avg, min, max = 6917.8, 3488.9, 4016715.6 00:11:53.579 complete (in ns) avg, min, max = 27746.3, 2063.3, 4017261.1 00:11:53.579 00:11:53.579 Submit histogram 00:11:53.579 ================ 00:11:53.579 Range in us Cumulative Count 00:11:53.579 3.484 - 3.508: 0.1426% ( 19) 00:11:53.579 3.508 - 3.532: 0.8029% ( 88) 00:11:53.579 3.532 - 3.556: 2.5062% ( 227) 00:11:53.579 3.556 - 3.579: 6.6031% ( 546) 00:11:53.579 3.579 - 3.603: 13.2513% ( 886) 00:11:53.579 3.603 - 3.627: 21.7378% ( 1131) 00:11:53.579 3.627 - 3.650: 31.1023% ( 1248) 00:11:53.579 3.650 - 3.674: 39.1536% ( 1073) 00:11:53.579 3.674 - 3.698: 46.7847% ( 1017) 00:11:53.579 3.698 - 3.721: 52.8626% ( 810) 00:11:53.579 3.721 - 3.745: 57.3272% ( 595) 00:11:53.579 3.745 - 3.769: 61.2441% ( 522) 00:11:53.579 3.769 - 3.793: 64.6357% ( 452) 00:11:53.579 3.793 - 3.816: 67.9823% ( 446) 00:11:53.579 3.816 - 3.840: 71.4264% ( 459) 00:11:53.579 3.840 - 3.864: 75.5834% ( 554) 00:11:53.579 3.864 - 3.887: 79.6428% ( 541) 00:11:53.579 3.887 - 3.911: 82.7118% ( 409) 00:11:53.579 3.911 - 3.935: 85.8483% ( 418) 00:11:53.579 3.935 - 3.959: 87.7842% ( 258) 00:11:53.579 3.959 - 3.982: 89.3900% ( 214) 00:11:53.579 3.982 - 4.006: 90.7556% ( 182) 00:11:53.579 4.006 - 4.030: 91.9337% ( 157) 00:11:53.579 4.030 - 4.053: 93.0217% ( 145) 00:11:53.579 4.053 - 4.077: 93.9746% ( 127) 00:11:53.579 4.077 - 4.101: 94.7625% ( 105) 00:11:53.579 4.101 - 4.124: 95.4904% ( 97) 00:11:53.579 4.124 - 4.148: 95.9406% ( 60) 00:11:53.579 4.148 - 4.172: 96.3308% ( 52) 00:11:53.579 4.172 - 4.196: 96.5784% ( 33) 00:11:53.579 4.196 - 4.219: 96.7810% ( 27) 00:11:53.579 4.219 - 4.243: 96.9536% ( 23) 00:11:53.579 4.243 - 4.267: 97.0586% ( 14) 00:11:53.579 4.267 - 4.290: 97.2387% ( 24) 00:11:53.579 4.290 - 4.314: 97.3512% ( 15) 00:11:53.580 4.314 - 4.338: 97.4863% ( 18) 00:11:53.580 4.338 - 4.361: 97.5538% ( 9) 00:11:53.580 4.361 - 4.385: 97.6064% ( 7) 00:11:53.580 4.385 - 4.409: 97.6514% ( 6) 00:11:53.580 4.409 - 4.433: 97.7039% ( 7) 00:11:53.580 4.433 - 4.456: 97.7189% ( 2) 00:11:53.580 4.456 - 4.480: 97.7339% ( 2) 00:11:53.580 4.480 - 4.504: 97.7489% ( 2) 00:11:53.580 4.504 - 4.527: 97.7564% ( 1) 00:11:53.580 4.527 - 4.551: 97.7639% ( 1) 00:11:53.580 4.575 - 4.599: 97.7714% ( 1) 00:11:53.580 4.599 - 4.622: 97.7864% ( 2) 00:11:53.580 4.622 - 4.646: 97.8015% ( 2) 00:11:53.580 4.646 - 4.670: 97.8465% ( 6) 00:11:53.580 4.670 - 4.693: 97.8615% ( 2) 00:11:53.580 4.693 - 4.717: 97.9290% ( 9) 00:11:53.580 4.717 - 4.741: 97.9740% ( 6) 00:11:53.580 4.741 - 4.764: 98.0341% ( 8) 00:11:53.580 4.764 - 4.788: 98.0791% ( 6) 00:11:53.580 4.788 - 4.812: 98.1391% ( 8) 00:11:53.580 4.812 - 4.836: 98.1841% ( 6) 00:11:53.580 4.836 - 4.859: 98.2367% ( 7) 00:11:53.580 4.859 - 4.883: 98.2667% ( 4) 00:11:53.580 4.883 - 4.907: 98.3342% ( 9) 00:11:53.580 4.907 - 4.930: 98.3417% ( 1) 00:11:53.580 4.930 - 4.954: 98.3717% ( 4) 00:11:53.580 4.954 - 4.978: 98.4092% ( 5) 00:11:53.580 4.978 - 5.001: 98.4243% ( 2) 00:11:53.580 5.001 - 5.025: 98.4543% ( 4) 00:11:53.580 5.025 - 5.049: 98.4618% ( 1) 00:11:53.580 5.049 - 5.073: 98.4993% ( 5) 00:11:53.580 5.073 - 5.096: 98.5143% ( 2) 00:11:53.580 5.096 - 5.120: 98.5293% ( 2) 00:11:53.580 5.144 - 5.167: 98.5518% ( 3) 00:11:53.580 5.167 - 5.191: 98.5668% ( 2) 00:11:53.580 5.191 - 5.215: 98.5818% ( 2) 00:11:53.580 5.215 - 5.239: 98.5893% ( 1) 00:11:53.580 5.310 - 5.333: 98.5968% ( 1) 00:11:53.580 5.357 - 5.381: 98.6043% ( 1) 00:11:53.580 5.452 - 5.476: 98.6193% ( 2) 00:11:53.580 5.594 - 5.618: 98.6268% ( 1) 00:11:53.580 5.689 - 5.713: 98.6344% ( 1) 
00:11:53.580 5.736 - 5.760: 98.6419% ( 1) 00:11:53.580 5.902 - 5.926: 98.6494% ( 1) 00:11:53.580 5.926 - 5.950: 98.6569% ( 1) 00:11:53.580 6.400 - 6.447: 98.6644% ( 1) 00:11:53.580 6.969 - 7.016: 98.6719% ( 1) 00:11:53.580 7.064 - 7.111: 98.6794% ( 1) 00:11:53.580 7.443 - 7.490: 98.6869% ( 1) 00:11:53.580 7.490 - 7.538: 98.6944% ( 1) 00:11:53.580 7.633 - 7.680: 98.7019% ( 1) 00:11:53.580 7.680 - 7.727: 98.7169% ( 2) 00:11:53.580 7.727 - 7.775: 98.7244% ( 1) 00:11:53.580 7.775 - 7.822: 98.7319% ( 1) 00:11:53.580 7.822 - 7.870: 98.7394% ( 1) 00:11:53.580 7.964 - 8.012: 98.7469% ( 1) 00:11:53.580 8.059 - 8.107: 98.7544% ( 1) 00:11:53.580 8.154 - 8.201: 98.7619% ( 1) 00:11:53.580 8.201 - 8.249: 98.7694% ( 1) 00:11:53.580 8.344 - 8.391: 98.7769% ( 1) 00:11:53.580 8.391 - 8.439: 98.7844% ( 1) 00:11:53.580 8.865 - 8.913: 98.7919% ( 1) 00:11:53.580 8.960 - 9.007: 98.7994% ( 1) 00:11:53.580 9.055 - 9.102: 98.8069% ( 1) 00:11:53.580 9.102 - 9.150: 98.8144% ( 1) 00:11:53.580 9.197 - 9.244: 98.8219% ( 1) 00:11:53.580 9.481 - 9.529: 98.8294% ( 1) 00:11:53.580 9.529 - 9.576: 98.8520% ( 3) 00:11:53.580 9.576 - 9.624: 98.8595% ( 1) 00:11:53.580 9.624 - 9.671: 98.8670% ( 1) 00:11:53.580 9.766 - 9.813: 98.8745% ( 1) 00:11:53.580 9.861 - 9.908: 98.8820% ( 1) 00:11:53.580 9.908 - 9.956: 98.8895% ( 1) 00:11:53.580 9.956 - 10.003: 98.9045% ( 2) 00:11:53.580 10.193 - 10.240: 98.9120% ( 1) 00:11:53.580 10.714 - 10.761: 98.9195% ( 1) 00:11:53.580 10.761 - 10.809: 98.9345% ( 2) 00:11:53.580 10.856 - 10.904: 98.9420% ( 1) 00:11:53.580 10.999 - 11.046: 98.9495% ( 1) 00:11:53.580 11.093 - 11.141: 98.9570% ( 1) 00:11:53.580 11.188 - 11.236: 98.9645% ( 1) 00:11:53.580 11.520 - 11.567: 98.9720% ( 1) 00:11:53.580 11.804 - 11.852: 98.9795% ( 1) 00:11:53.580 12.136 - 12.231: 98.9870% ( 1) 00:11:53.580 13.274 - 13.369: 98.9945% ( 1) 00:11:53.580 13.559 - 13.653: 99.0020% ( 1) 00:11:53.580 13.653 - 13.748: 99.0095% ( 1) 00:11:53.580 14.981 - 15.076: 99.0170% ( 1) 00:11:53.580 17.351 - 17.446: 99.0395% ( 3) 00:11:53.580 17.446 - 17.541: 99.0621% ( 3) 00:11:53.580 17.541 - 17.636: 99.0771% ( 2) 00:11:53.580 17.636 - 17.730: 99.1071% ( 4) 00:11:53.580 17.730 - 17.825: 99.1671% ( 8) 00:11:53.580 17.825 - 17.920: 99.2196% ( 7) 00:11:53.580 17.920 - 18.015: 99.2647% ( 6) 00:11:53.580 18.015 - 18.110: 99.3022% ( 5) 00:11:53.580 18.110 - 18.204: 99.3697% ( 9) 00:11:53.580 18.204 - 18.299: 99.4447% ( 10) 00:11:53.580 18.299 - 18.394: 99.5198% ( 10) 00:11:53.580 18.394 - 18.489: 99.5798% ( 8) 00:11:53.580 18.489 - 18.584: 99.6698% ( 12) 00:11:53.580 18.584 - 18.679: 99.6999% ( 4) 00:11:53.580 18.679 - 18.773: 99.7524% ( 7) 00:11:53.580 18.773 - 18.868: 99.7749% ( 3) 00:11:53.580 18.868 - 18.963: 99.7899% ( 2) 00:11:53.580 18.963 - 19.058: 99.8274% ( 5) 00:11:53.580 19.058 - 19.153: 99.8424% ( 2) 00:11:53.580 19.153 - 19.247: 99.8574% ( 2) 00:11:53.580 19.247 - 19.342: 99.8649% ( 1) 00:11:53.580 19.532 - 19.627: 99.8799% ( 2) 00:11:53.580 19.627 - 19.721: 99.8874% ( 1) 00:11:53.580 19.816 - 19.911: 99.8950% ( 1) 00:11:53.580 20.385 - 20.480: 99.9025% ( 1) 00:11:53.580 20.859 - 20.954: 99.9100% ( 1) 00:11:53.580 23.609 - 23.704: 99.9175% ( 1) 00:11:53.580 24.462 - 24.652: 99.9250% ( 1) 00:11:53.580 3980.705 - 4004.978: 99.9850% ( 8) 00:11:53.580 4004.978 - 4029.250: 100.0000% ( 2) 00:11:53.580 00:11:53.580 Complete histogram 00:11:53.580 ================== 00:11:53.580 Range in us Cumulative Count 00:11:53.580 2.062 - 2.074: 0.6753% ( 90) 00:11:53.580 2.074 - 2.086: 30.2844% ( 3946) 00:11:53.580 2.086 - 2.098: 40.1966% ( 1321) 
00:11:53.580 2.098 - 2.110: 43.9634% ( 502) 00:11:53.580 2.110 - 2.121: 58.2051% ( 1898) 00:11:53.580 2.121 - 2.133: 60.8014% ( 346) 00:11:53.580 2.133 - 2.145: 64.5607% ( 501) 00:11:53.580 2.145 - 2.157: 73.9251% ( 1248) 00:11:53.580 2.157 - 2.169: 75.6734% ( 233) 00:11:53.580 2.169 - 2.181: 78.1196% ( 326) 00:11:53.580 2.181 - 2.193: 81.9464% ( 510) 00:11:53.580 2.193 - 2.204: 82.9369% ( 132) 00:11:53.580 2.204 - 2.216: 83.9799% ( 139) 00:11:53.580 2.216 - 2.228: 87.5816% ( 480) 00:11:53.580 2.228 - 2.240: 89.4050% ( 243) 00:11:53.580 2.240 - 2.252: 91.2058% ( 240) 00:11:53.580 2.252 - 2.264: 93.2768% ( 276) 00:11:53.580 2.264 - 2.276: 93.8246% ( 73) 00:11:53.580 2.276 - 2.287: 94.1547% ( 44) 00:11:53.580 2.287 - 2.299: 94.4549% ( 40) 00:11:53.580 2.299 - 2.311: 94.9426% ( 65) 00:11:53.580 2.311 - 2.323: 95.5429% ( 80) 00:11:53.580 2.323 - 2.335: 95.6554% ( 15) 00:11:53.580 2.335 - 2.347: 95.7080% ( 7) 00:11:53.580 2.347 - 2.359: 95.7455% ( 5) 00:11:53.580 2.359 - 2.370: 95.8505% ( 14) 00:11:53.580 2.370 - 2.382: 96.0531% ( 27) 00:11:53.580 2.382 - 2.394: 96.4058% ( 47) 00:11:53.580 2.394 - 2.406: 96.7735% ( 49) 00:11:53.580 2.406 - 2.418: 97.0286% ( 34) 00:11:53.580 2.418 - 2.430: 97.2837% ( 34) 00:11:53.580 2.430 - 2.441: 97.4263% ( 19) 00:11:53.580 2.441 - 2.453: 97.5163% ( 12) 00:11:53.580 2.453 - 2.465: 97.6964% ( 24) 00:11:53.580 2.465 - 2.477: 97.8690% ( 23) 00:11:53.580 2.477 - 2.489: 97.9665% ( 13) 00:11:53.580 2.489 - 2.501: 98.0791% ( 15) 00:11:53.580 2.501 - 2.513: 98.1466% ( 9) 00:11:53.580 2.513 - 2.524: 98.1691% ( 3) 00:11:53.580 2.524 - 2.536: 98.2292% ( 8) 00:11:53.580 2.536 - 2.548: 98.2592% ( 4) 00:11:53.580 2.548 - 2.560: 98.3042% ( 6) 00:11:53.580 2.560 - 2.572: 98.3342% ( 4) 00:11:53.580 2.572 - 2.584: 98.3642% ( 4) 00:11:53.580 2.584 - 2.596: 98.3792% ( 2) 00:11:53.580 2.596 - 2.607: 98.3867% ( 1) 00:11:53.580 2.631 - 2.643: 98.3942% ( 1) 00:11:53.580 2.643 - 2.655: 98.4017% ( 1) 00:11:53.580 2.655 - 2.667: 98.4092% ( 1) 00:11:53.580 2.679 - 2.690: 98.4243% ( 2) 00:11:53.580 2.726 - 2.738: 98.4318% ( 1) 00:11:53.580 2.761 - 2.773: 98.4393% ( 1) 00:11:53.580 2.821 - 2.833: 98.4468% ( 1) 00:11:53.580 3.295 - 3.319: 98.4543% ( 1) 00:11:53.580 3.319 - 3.342: 98.4618% ( 1) 00:11:53.580 3.342 - 3.366: 98.4768% ( 2) 00:11:53.580 3.366 - 3.390: 98.4843% ( 1) 00:11:53.580 3.390 - 3.413: 98.5068% ( 3) 00:11:53.580 3.413 - 3.437: 98.5143% ( 1) 00:11:53.580 3.484 - 3.508: 98.5218% ( 1) 00:11:53.580 3.508 - 3.532: 98.5293% ( 1) 00:11:53.580 3.556 - 3.579: 98.5443% ( 2) 00:11:53.580 3.579 - 3.603: 98.5593% ( 2) 00:11:53.580 3.674 - 3.698: 98.5668% ( 1) 00:11:53.580 3.721 - 3.745: 98.5818% ( 2) 00:11:53.580 3.769 - 3.793: 98.5968% ( 2) 00:11:53.580 3.793 - 3.816: 98.6043% ( 1) 00:11:53.580 3.864 - 3.887: 98.6118% ( 1) 00:11:53.580 3.935 - 3.959: 98.6268% ( 2) 00:11:53.580 4.124 - 4.148: 98.6419% ( 2) 00:11:53.580 4.456 - 4.480: 98.6494% ( 1) 00:11:53.580 5.594 - 5.618: 98.6644% ( 2) 00:11:53.580 5.665 - 5.689: 98.6719% ( 1) 00:11:53.580 5.736 - 5.760: 98.6794% ( 1) 00:11:53.580 6.258 - 6.305: 98.6869% ( 1) 00:11:53.580 6.542 - 6.590: 98.6944% ( 1) 00:11:53.580 6.684 - 6.732: 98.7019% ( 1) 00:11:53.581 6.732 - 6.779: 98.7094% ( 1) 00:11:53.581 6.874 - 6.921: 98.7169% ( 1) 00:11:53.581 7.064 - 7.111: 98.7319% ( 2) 00:11:53.581 7.159 - 7.206: 98.7394% ( 1) 00:11:53.581 7.633 - 7.680: 98.7469% ( 1) 00:11:53.581 7.680 - 7.727: 98.7619% ( 2) 00:11:53.581 8.628 - 8.676: 98.7694% ( 1) 00:11:53.581 8.723 - 8.770: 98.7769% ( 1) 00:11:53.581 9.292 - 9.339: 98.7844% ( 1) 00:11:53.581 9.434 - 
9.481: 98.7919% ( 1) 00:11:53.581 10.809 - 10.856: 98.7994% ( 1) 00:11:53.581 11.188 - 11.236: 98.8069% ( 1) 00:11:53.581 11.804 - 11.852: 98.8144% ( 1) 00:11:53.581 13.464 - 13.559: 98.8219% ( 1) 00:11:53.581 15.170 - 15.265: 98.8294% ( 1) 00:11:53.581 15.455 - 15.550: 98.8445% ( 2) 00:11:53.581 15.644 - 15.739: 98.8520% ( 1) 00:11:53.581 15.739 - 15.834: 98.8820% ( 4) 00:11:53.581 15.834 - 15.929: 98.8895% ( 1) 00:11:53.581 15.929 - 16.024: 98.9045% ( 2) 00:11:53.581 16.024 - 16.119: 98.9345% ( 4) 00:11:53.581 16.119 - 16.213: 9[2024-07-15 13:50:48.033470] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:53.581 8.9795% ( 6) 00:11:53.581 16.213 - 16.308: 99.0020% ( 3) 00:11:53.581 16.308 - 16.403: 99.0320% ( 4) 00:11:53.581 16.403 - 16.498: 99.0621% ( 4) 00:11:53.581 16.498 - 16.593: 99.0921% ( 4) 00:11:53.581 16.593 - 16.687: 99.1371% ( 6) 00:11:53.581 16.687 - 16.782: 99.1746% ( 5) 00:11:53.581 16.782 - 16.877: 99.2196% ( 6) 00:11:53.581 16.877 - 16.972: 99.2346% ( 2) 00:11:53.581 16.972 - 17.067: 99.2797% ( 6) 00:11:53.581 17.161 - 17.256: 99.3097% ( 4) 00:11:53.581 17.256 - 17.351: 99.3172% ( 1) 00:11:53.581 17.351 - 17.446: 99.3247% ( 1) 00:11:53.581 17.541 - 17.636: 99.3322% ( 1) 00:11:53.581 17.730 - 17.825: 99.3472% ( 2) 00:11:53.581 17.920 - 18.015: 99.3547% ( 1) 00:11:53.581 20.859 - 20.954: 99.3622% ( 1) 00:11:53.581 3980.705 - 4004.978: 99.8574% ( 66) 00:11:53.581 4004.978 - 4029.250: 100.0000% ( 19) 00:11:53.581 00:11:53.581 13:50:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:11:53.581 13:50:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:11:53.581 13:50:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:11:53.581 13:50:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:11:53.581 13:50:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:53.581 [ 00:11:53.581 { 00:11:53.581 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:53.581 "subtype": "Discovery", 00:11:53.581 "listen_addresses": [], 00:11:53.581 "allow_any_host": true, 00:11:53.581 "hosts": [] 00:11:53.581 }, 00:11:53.581 { 00:11:53.581 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:53.581 "subtype": "NVMe", 00:11:53.581 "listen_addresses": [ 00:11:53.581 { 00:11:53.581 "trtype": "VFIOUSER", 00:11:53.581 "adrfam": "IPv4", 00:11:53.581 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:53.581 "trsvcid": "0" 00:11:53.581 } 00:11:53.581 ], 00:11:53.581 "allow_any_host": true, 00:11:53.581 "hosts": [], 00:11:53.581 "serial_number": "SPDK1", 00:11:53.581 "model_number": "SPDK bdev Controller", 00:11:53.581 "max_namespaces": 32, 00:11:53.581 "min_cntlid": 1, 00:11:53.581 "max_cntlid": 65519, 00:11:53.581 "namespaces": [ 00:11:53.581 { 00:11:53.581 "nsid": 1, 00:11:53.581 "bdev_name": "Malloc1", 00:11:53.581 "name": "Malloc1", 00:11:53.581 "nguid": "B51F7163289244A18F95D3B1196B493F", 00:11:53.581 "uuid": "b51f7163-2892-44a1-8f95-d3b1196b493f" 00:11:53.581 } 00:11:53.581 ] 00:11:53.581 }, 00:11:53.581 { 00:11:53.581 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:53.581 "subtype": "NVMe", 00:11:53.581 "listen_addresses": [ 00:11:53.581 { 00:11:53.581 "trtype": "VFIOUSER", 00:11:53.581 "adrfam": "IPv4", 
00:11:53.581 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:53.581 "trsvcid": "0" 00:11:53.581 } 00:11:53.581 ], 00:11:53.581 "allow_any_host": true, 00:11:53.581 "hosts": [], 00:11:53.581 "serial_number": "SPDK2", 00:11:53.581 "model_number": "SPDK bdev Controller", 00:11:53.581 "max_namespaces": 32, 00:11:53.581 "min_cntlid": 1, 00:11:53.581 "max_cntlid": 65519, 00:11:53.581 "namespaces": [ 00:11:53.581 { 00:11:53.581 "nsid": 1, 00:11:53.581 "bdev_name": "Malloc2", 00:11:53.581 "name": "Malloc2", 00:11:53.581 "nguid": "A33F985EDADF4BFE8C09E8756CFBE88E", 00:11:53.581 "uuid": "a33f985e-dadf-4bfe-8c09-e8756cfbe88e" 00:11:53.581 } 00:11:53.581 ] 00:11:53.581 } 00:11:53.581 ] 00:11:53.581 13:50:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:11:53.581 13:50:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3701690 00:11:53.581 13:50:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:11:53.581 13:50:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:11:53.581 13:50:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:11:53.581 13:50:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:11:53.581 13:50:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:11:53.581 13:50:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:11:53.581 13:50:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:11:53.581 13:50:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:11:53.839 EAL: No free 2048 kB hugepages reported on node 1 00:11:53.839 [2024-07-15 13:50:48.541248] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:53.839 Malloc3 00:11:53.839 13:50:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:11:54.097 [2024-07-15 13:50:48.920000] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:54.097 13:50:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:54.355 Asynchronous Event Request test 00:11:54.355 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:54.355 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:54.355 Registering asynchronous event callbacks... 00:11:54.355 Starting namespace attribute notice tests for all controllers... 00:11:54.355 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:11:54.355 aer_cb - Changed Namespace 00:11:54.355 Cleaning up... 
00:11:54.355 [ 00:11:54.355 { 00:11:54.355 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:54.355 "subtype": "Discovery", 00:11:54.355 "listen_addresses": [], 00:11:54.355 "allow_any_host": true, 00:11:54.355 "hosts": [] 00:11:54.355 }, 00:11:54.355 { 00:11:54.355 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:54.355 "subtype": "NVMe", 00:11:54.355 "listen_addresses": [ 00:11:54.355 { 00:11:54.355 "trtype": "VFIOUSER", 00:11:54.355 "adrfam": "IPv4", 00:11:54.355 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:54.355 "trsvcid": "0" 00:11:54.355 } 00:11:54.355 ], 00:11:54.355 "allow_any_host": true, 00:11:54.355 "hosts": [], 00:11:54.355 "serial_number": "SPDK1", 00:11:54.355 "model_number": "SPDK bdev Controller", 00:11:54.355 "max_namespaces": 32, 00:11:54.355 "min_cntlid": 1, 00:11:54.355 "max_cntlid": 65519, 00:11:54.355 "namespaces": [ 00:11:54.355 { 00:11:54.355 "nsid": 1, 00:11:54.355 "bdev_name": "Malloc1", 00:11:54.355 "name": "Malloc1", 00:11:54.355 "nguid": "B51F7163289244A18F95D3B1196B493F", 00:11:54.355 "uuid": "b51f7163-2892-44a1-8f95-d3b1196b493f" 00:11:54.355 }, 00:11:54.355 { 00:11:54.355 "nsid": 2, 00:11:54.355 "bdev_name": "Malloc3", 00:11:54.355 "name": "Malloc3", 00:11:54.355 "nguid": "A1AC7008A6824C81982E72B24583EFB5", 00:11:54.355 "uuid": "a1ac7008-a682-4c81-982e-72b24583efb5" 00:11:54.355 } 00:11:54.355 ] 00:11:54.355 }, 00:11:54.355 { 00:11:54.355 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:54.355 "subtype": "NVMe", 00:11:54.355 "listen_addresses": [ 00:11:54.355 { 00:11:54.355 "trtype": "VFIOUSER", 00:11:54.355 "adrfam": "IPv4", 00:11:54.355 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:54.355 "trsvcid": "0" 00:11:54.355 } 00:11:54.355 ], 00:11:54.355 "allow_any_host": true, 00:11:54.355 "hosts": [], 00:11:54.355 "serial_number": "SPDK2", 00:11:54.355 "model_number": "SPDK bdev Controller", 00:11:54.355 "max_namespaces": 32, 00:11:54.355 "min_cntlid": 1, 00:11:54.355 "max_cntlid": 65519, 00:11:54.355 "namespaces": [ 00:11:54.355 { 00:11:54.355 "nsid": 1, 00:11:54.355 "bdev_name": "Malloc2", 00:11:54.355 "name": "Malloc2", 00:11:54.355 "nguid": "A33F985EDADF4BFE8C09E8756CFBE88E", 00:11:54.355 "uuid": "a33f985e-dadf-4bfe-8c09-e8756cfbe88e" 00:11:54.355 } 00:11:54.355 ] 00:11:54.355 } 00:11:54.355 ] 00:11:54.355 13:50:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3701690 00:11:54.355 13:50:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:54.355 13:50:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:11:54.355 13:50:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:11:54.355 13:50:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:11:54.615 [2024-07-15 13:50:49.207454] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
00:11:54.615 [2024-07-15 13:50:49.207497] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3701827 ] 00:11:54.615 EAL: No free 2048 kB hugepages reported on node 1 00:11:54.615 [2024-07-15 13:50:49.242848] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:11:54.615 [2024-07-15 13:50:49.252972] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:54.615 [2024-07-15 13:50:49.253003] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f5cfbf25000 00:11:54.615 [2024-07-15 13:50:49.253970] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:54.615 [2024-07-15 13:50:49.254990] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:54.615 [2024-07-15 13:50:49.255992] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:54.615 [2024-07-15 13:50:49.256995] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:54.615 [2024-07-15 13:50:49.258002] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:54.615 [2024-07-15 13:50:49.259007] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:54.615 [2024-07-15 13:50:49.260031] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:54.615 [2024-07-15 13:50:49.261026] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:54.615 [2024-07-15 13:50:49.262049] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:54.615 [2024-07-15 13:50:49.262069] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f5cfbf1a000 00:11:54.615 [2024-07-15 13:50:49.263180] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:54.615 [2024-07-15 13:50:49.280908] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:11:54.615 [2024-07-15 13:50:49.280943] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:11:54.615 [2024-07-15 13:50:49.283040] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:11:54.615 [2024-07-15 13:50:49.283092] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:11:54.615 [2024-07-15 13:50:49.283177] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to 
wait for connect adminq (no timeout) 00:11:54.615 [2024-07-15 13:50:49.283198] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:11:54.615 [2024-07-15 13:50:49.283208] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:11:54.615 [2024-07-15 13:50:49.284046] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:11:54.615 [2024-07-15 13:50:49.284067] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:11:54.615 [2024-07-15 13:50:49.284079] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:11:54.615 [2024-07-15 13:50:49.285058] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:11:54.616 [2024-07-15 13:50:49.285079] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:11:54.616 [2024-07-15 13:50:49.285092] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:11:54.616 [2024-07-15 13:50:49.286061] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:11:54.616 [2024-07-15 13:50:49.286081] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:11:54.616 [2024-07-15 13:50:49.287060] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:11:54.616 [2024-07-15 13:50:49.287080] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:11:54.616 [2024-07-15 13:50:49.287089] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:11:54.616 [2024-07-15 13:50:49.287100] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:11:54.616 [2024-07-15 13:50:49.287209] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:11:54.616 [2024-07-15 13:50:49.287217] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:11:54.616 [2024-07-15 13:50:49.287225] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:11:54.616 [2024-07-15 13:50:49.288069] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:11:54.616 [2024-07-15 13:50:49.289084] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:11:54.616 [2024-07-15 13:50:49.290090] nvme_vfio_user.c: 
49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:11:54.616 [2024-07-15 13:50:49.291071] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:54.616 [2024-07-15 13:50:49.291139] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:11:54.616 [2024-07-15 13:50:49.292103] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:11:54.616 [2024-07-15 13:50:49.292124] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:11:54.616 [2024-07-15 13:50:49.292148] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:11:54.616 [2024-07-15 13:50:49.292171] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:11:54.616 [2024-07-15 13:50:49.292185] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:11:54.616 [2024-07-15 13:50:49.292203] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:54.616 [2024-07-15 13:50:49.292212] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:54.616 [2024-07-15 13:50:49.292229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:54.616 [2024-07-15 13:50:49.298753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:11:54.616 [2024-07-15 13:50:49.298774] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:11:54.616 [2024-07-15 13:50:49.298787] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:11:54.616 [2024-07-15 13:50:49.298795] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:11:54.616 [2024-07-15 13:50:49.298803] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:11:54.616 [2024-07-15 13:50:49.298811] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:11:54.616 [2024-07-15 13:50:49.298818] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:11:54.616 [2024-07-15 13:50:49.298826] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:11:54.616 [2024-07-15 13:50:49.298838] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:11:54.616 [2024-07-15 13:50:49.298853] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 
0x0 00:11:54.616 [2024-07-15 13:50:49.306748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:11:54.616 [2024-07-15 13:50:49.306776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:54.616 [2024-07-15 13:50:49.306791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:54.616 [2024-07-15 13:50:49.306803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:54.616 [2024-07-15 13:50:49.306815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:54.616 [2024-07-15 13:50:49.306823] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:11:54.616 [2024-07-15 13:50:49.306839] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:11:54.616 [2024-07-15 13:50:49.306854] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:11:54.616 [2024-07-15 13:50:49.314749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:11:54.616 [2024-07-15 13:50:49.314767] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:11:54.616 [2024-07-15 13:50:49.314776] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:11:54.616 [2024-07-15 13:50:49.314788] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:11:54.616 [2024-07-15 13:50:49.314799] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:11:54.616 [2024-07-15 13:50:49.314812] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:54.616 [2024-07-15 13:50:49.322762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:11:54.616 [2024-07-15 13:50:49.322837] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:11:54.616 [2024-07-15 13:50:49.322853] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:11:54.616 [2024-07-15 13:50:49.322866] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:11:54.616 [2024-07-15 13:50:49.322874] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:11:54.616 [2024-07-15 13:50:49.322884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 
0x2000002f9000 PRP2 0x0 00:11:54.616 [2024-07-15 13:50:49.330762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:11:54.616 [2024-07-15 13:50:49.330784] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:11:54.616 [2024-07-15 13:50:49.330805] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:11:54.616 [2024-07-15 13:50:49.330819] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:11:54.616 [2024-07-15 13:50:49.330832] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:54.616 [2024-07-15 13:50:49.330840] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:54.616 [2024-07-15 13:50:49.330850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:54.616 [2024-07-15 13:50:49.338747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:11:54.616 [2024-07-15 13:50:49.338773] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:11:54.616 [2024-07-15 13:50:49.338789] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:11:54.616 [2024-07-15 13:50:49.338803] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:54.616 [2024-07-15 13:50:49.338811] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:54.616 [2024-07-15 13:50:49.338821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:54.616 [2024-07-15 13:50:49.346765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:11:54.616 [2024-07-15 13:50:49.346785] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:11:54.616 [2024-07-15 13:50:49.346797] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:11:54.616 [2024-07-15 13:50:49.346812] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:11:54.616 [2024-07-15 13:50:49.346822] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:11:54.616 [2024-07-15 13:50:49.346831] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:11:54.616 [2024-07-15 13:50:49.346839] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:11:54.616 
[2024-07-15 13:50:49.346851] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:11:54.616 [2024-07-15 13:50:49.346859] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:11:54.616 [2024-07-15 13:50:49.346867] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:11:54.616 [2024-07-15 13:50:49.346890] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:11:54.616 [2024-07-15 13:50:49.354746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:11:54.616 [2024-07-15 13:50:49.354787] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:11:54.616 [2024-07-15 13:50:49.362748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:11:54.616 [2024-07-15 13:50:49.362773] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:11:54.616 [2024-07-15 13:50:49.370763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:11:54.616 [2024-07-15 13:50:49.370788] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:54.617 [2024-07-15 13:50:49.378764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:11:54.617 [2024-07-15 13:50:49.378797] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:11:54.617 [2024-07-15 13:50:49.378808] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:11:54.617 [2024-07-15 13:50:49.378814] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:11:54.617 [2024-07-15 13:50:49.378820] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:11:54.617 [2024-07-15 13:50:49.378830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:11:54.617 [2024-07-15 13:50:49.378842] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:11:54.617 [2024-07-15 13:50:49.378850] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:11:54.617 [2024-07-15 13:50:49.378858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:11:54.617 [2024-07-15 13:50:49.378869] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:11:54.617 [2024-07-15 13:50:49.378877] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:54.617 [2024-07-15 13:50:49.378886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 
0x0 00:11:54.617 [2024-07-15 13:50:49.378897] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:11:54.617 [2024-07-15 13:50:49.378905] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:11:54.617 [2024-07-15 13:50:49.378914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:11:54.617 [2024-07-15 13:50:49.386763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:11:54.617 [2024-07-15 13:50:49.386791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:11:54.617 [2024-07-15 13:50:49.386808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:11:54.617 [2024-07-15 13:50:49.386824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:11:54.617 ===================================================== 00:11:54.617 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:54.617 ===================================================== 00:11:54.617 Controller Capabilities/Features 00:11:54.617 ================================ 00:11:54.617 Vendor ID: 4e58 00:11:54.617 Subsystem Vendor ID: 4e58 00:11:54.617 Serial Number: SPDK2 00:11:54.617 Model Number: SPDK bdev Controller 00:11:54.617 Firmware Version: 24.09 00:11:54.617 Recommended Arb Burst: 6 00:11:54.617 IEEE OUI Identifier: 8d 6b 50 00:11:54.617 Multi-path I/O 00:11:54.617 May have multiple subsystem ports: Yes 00:11:54.617 May have multiple controllers: Yes 00:11:54.617 Associated with SR-IOV VF: No 00:11:54.617 Max Data Transfer Size: 131072 00:11:54.617 Max Number of Namespaces: 32 00:11:54.617 Max Number of I/O Queues: 127 00:11:54.617 NVMe Specification Version (VS): 1.3 00:11:54.617 NVMe Specification Version (Identify): 1.3 00:11:54.617 Maximum Queue Entries: 256 00:11:54.617 Contiguous Queues Required: Yes 00:11:54.617 Arbitration Mechanisms Supported 00:11:54.617 Weighted Round Robin: Not Supported 00:11:54.617 Vendor Specific: Not Supported 00:11:54.617 Reset Timeout: 15000 ms 00:11:54.617 Doorbell Stride: 4 bytes 00:11:54.617 NVM Subsystem Reset: Not Supported 00:11:54.617 Command Sets Supported 00:11:54.617 NVM Command Set: Supported 00:11:54.617 Boot Partition: Not Supported 00:11:54.617 Memory Page Size Minimum: 4096 bytes 00:11:54.617 Memory Page Size Maximum: 4096 bytes 00:11:54.617 Persistent Memory Region: Not Supported 00:11:54.617 Optional Asynchronous Events Supported 00:11:54.617 Namespace Attribute Notices: Supported 00:11:54.617 Firmware Activation Notices: Not Supported 00:11:54.617 ANA Change Notices: Not Supported 00:11:54.617 PLE Aggregate Log Change Notices: Not Supported 00:11:54.617 LBA Status Info Alert Notices: Not Supported 00:11:54.617 EGE Aggregate Log Change Notices: Not Supported 00:11:54.617 Normal NVM Subsystem Shutdown event: Not Supported 00:11:54.617 Zone Descriptor Change Notices: Not Supported 00:11:54.617 Discovery Log Change Notices: Not Supported 00:11:54.617 Controller Attributes 00:11:54.617 128-bit Host Identifier: Supported 00:11:54.617 Non-Operational Permissive Mode: Not Supported 00:11:54.617 NVM Sets: Not Supported 00:11:54.617 Read Recovery Levels: Not Supported 
00:11:54.617 Endurance Groups: Not Supported 00:11:54.617 Predictable Latency Mode: Not Supported 00:11:54.617 Traffic Based Keep ALive: Not Supported 00:11:54.617 Namespace Granularity: Not Supported 00:11:54.617 SQ Associations: Not Supported 00:11:54.617 UUID List: Not Supported 00:11:54.617 Multi-Domain Subsystem: Not Supported 00:11:54.617 Fixed Capacity Management: Not Supported 00:11:54.617 Variable Capacity Management: Not Supported 00:11:54.617 Delete Endurance Group: Not Supported 00:11:54.617 Delete NVM Set: Not Supported 00:11:54.617 Extended LBA Formats Supported: Not Supported 00:11:54.617 Flexible Data Placement Supported: Not Supported 00:11:54.617 00:11:54.617 Controller Memory Buffer Support 00:11:54.617 ================================ 00:11:54.617 Supported: No 00:11:54.617 00:11:54.617 Persistent Memory Region Support 00:11:54.617 ================================ 00:11:54.617 Supported: No 00:11:54.617 00:11:54.617 Admin Command Set Attributes 00:11:54.617 ============================ 00:11:54.617 Security Send/Receive: Not Supported 00:11:54.617 Format NVM: Not Supported 00:11:54.617 Firmware Activate/Download: Not Supported 00:11:54.617 Namespace Management: Not Supported 00:11:54.617 Device Self-Test: Not Supported 00:11:54.617 Directives: Not Supported 00:11:54.617 NVMe-MI: Not Supported 00:11:54.617 Virtualization Management: Not Supported 00:11:54.617 Doorbell Buffer Config: Not Supported 00:11:54.617 Get LBA Status Capability: Not Supported 00:11:54.617 Command & Feature Lockdown Capability: Not Supported 00:11:54.617 Abort Command Limit: 4 00:11:54.617 Async Event Request Limit: 4 00:11:54.617 Number of Firmware Slots: N/A 00:11:54.617 Firmware Slot 1 Read-Only: N/A 00:11:54.617 Firmware Activation Without Reset: N/A 00:11:54.617 Multiple Update Detection Support: N/A 00:11:54.617 Firmware Update Granularity: No Information Provided 00:11:54.617 Per-Namespace SMART Log: No 00:11:54.617 Asymmetric Namespace Access Log Page: Not Supported 00:11:54.617 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:11:54.617 Command Effects Log Page: Supported 00:11:54.617 Get Log Page Extended Data: Supported 00:11:54.617 Telemetry Log Pages: Not Supported 00:11:54.617 Persistent Event Log Pages: Not Supported 00:11:54.617 Supported Log Pages Log Page: May Support 00:11:54.617 Commands Supported & Effects Log Page: Not Supported 00:11:54.617 Feature Identifiers & Effects Log Page:May Support 00:11:54.617 NVMe-MI Commands & Effects Log Page: May Support 00:11:54.617 Data Area 4 for Telemetry Log: Not Supported 00:11:54.617 Error Log Page Entries Supported: 128 00:11:54.617 Keep Alive: Supported 00:11:54.617 Keep Alive Granularity: 10000 ms 00:11:54.617 00:11:54.617 NVM Command Set Attributes 00:11:54.617 ========================== 00:11:54.617 Submission Queue Entry Size 00:11:54.617 Max: 64 00:11:54.617 Min: 64 00:11:54.617 Completion Queue Entry Size 00:11:54.617 Max: 16 00:11:54.617 Min: 16 00:11:54.617 Number of Namespaces: 32 00:11:54.617 Compare Command: Supported 00:11:54.617 Write Uncorrectable Command: Not Supported 00:11:54.617 Dataset Management Command: Supported 00:11:54.617 Write Zeroes Command: Supported 00:11:54.617 Set Features Save Field: Not Supported 00:11:54.617 Reservations: Not Supported 00:11:54.617 Timestamp: Not Supported 00:11:54.617 Copy: Supported 00:11:54.617 Volatile Write Cache: Present 00:11:54.617 Atomic Write Unit (Normal): 1 00:11:54.617 Atomic Write Unit (PFail): 1 00:11:54.617 Atomic Compare & Write Unit: 1 00:11:54.617 Fused Compare & Write: 
Supported 00:11:54.617 Scatter-Gather List 00:11:54.617 SGL Command Set: Supported (Dword aligned) 00:11:54.617 SGL Keyed: Not Supported 00:11:54.617 SGL Bit Bucket Descriptor: Not Supported 00:11:54.617 SGL Metadata Pointer: Not Supported 00:11:54.617 Oversized SGL: Not Supported 00:11:54.617 SGL Metadata Address: Not Supported 00:11:54.617 SGL Offset: Not Supported 00:11:54.617 Transport SGL Data Block: Not Supported 00:11:54.617 Replay Protected Memory Block: Not Supported 00:11:54.617 00:11:54.617 Firmware Slot Information 00:11:54.617 ========================= 00:11:54.617 Active slot: 1 00:11:54.617 Slot 1 Firmware Revision: 24.09 00:11:54.617 00:11:54.617 00:11:54.617 Commands Supported and Effects 00:11:54.617 ============================== 00:11:54.617 Admin Commands 00:11:54.617 -------------- 00:11:54.617 Get Log Page (02h): Supported 00:11:54.617 Identify (06h): Supported 00:11:54.617 Abort (08h): Supported 00:11:54.617 Set Features (09h): Supported 00:11:54.617 Get Features (0Ah): Supported 00:11:54.617 Asynchronous Event Request (0Ch): Supported 00:11:54.617 Keep Alive (18h): Supported 00:11:54.617 I/O Commands 00:11:54.617 ------------ 00:11:54.617 Flush (00h): Supported LBA-Change 00:11:54.617 Write (01h): Supported LBA-Change 00:11:54.617 Read (02h): Supported 00:11:54.617 Compare (05h): Supported 00:11:54.617 Write Zeroes (08h): Supported LBA-Change 00:11:54.617 Dataset Management (09h): Supported LBA-Change 00:11:54.618 Copy (19h): Supported LBA-Change 00:11:54.618 00:11:54.618 Error Log 00:11:54.618 ========= 00:11:54.618 00:11:54.618 Arbitration 00:11:54.618 =========== 00:11:54.618 Arbitration Burst: 1 00:11:54.618 00:11:54.618 Power Management 00:11:54.618 ================ 00:11:54.618 Number of Power States: 1 00:11:54.618 Current Power State: Power State #0 00:11:54.618 Power State #0: 00:11:54.618 Max Power: 0.00 W 00:11:54.618 Non-Operational State: Operational 00:11:54.618 Entry Latency: Not Reported 00:11:54.618 Exit Latency: Not Reported 00:11:54.618 Relative Read Throughput: 0 00:11:54.618 Relative Read Latency: 0 00:11:54.618 Relative Write Throughput: 0 00:11:54.618 Relative Write Latency: 0 00:11:54.618 Idle Power: Not Reported 00:11:54.618 Active Power: Not Reported 00:11:54.618 Non-Operational Permissive Mode: Not Supported 00:11:54.618 00:11:54.618 Health Information 00:11:54.618 ================== 00:11:54.618 Critical Warnings: 00:11:54.618 Available Spare Space: OK 00:11:54.618 Temperature: OK 00:11:54.618 Device Reliability: OK 00:11:54.618 Read Only: No 00:11:54.618 Volatile Memory Backup: OK 00:11:54.618 Current Temperature: 0 Kelvin (-273 Celsius) 00:11:54.618 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:11:54.618 Available Spare: 0% 00:11:54.618 Available Sp[2024-07-15 13:50:49.386937] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:11:54.618 [2024-07-15 13:50:49.394764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:11:54.618 [2024-07-15 13:50:49.394814] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:11:54.618 [2024-07-15 13:50:49.394831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:54.618 [2024-07-15 13:50:49.394842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:54.618 [2024-07-15 13:50:49.394853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:54.618 [2024-07-15 13:50:49.394863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:54.618 [2024-07-15 13:50:49.394925] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:11:54.618 [2024-07-15 13:50:49.394946] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:11:54.618 [2024-07-15 13:50:49.395921] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:54.618 [2024-07-15 13:50:49.395991] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:11:54.618 [2024-07-15 13:50:49.396006] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:11:54.618 [2024-07-15 13:50:49.396939] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:11:54.618 [2024-07-15 13:50:49.396964] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:11:54.618 [2024-07-15 13:50:49.397017] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:11:54.618 [2024-07-15 13:50:49.399750] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:54.618 are Threshold: 0% 00:11:54.618 Life Percentage Used: 0% 00:11:54.618 Data Units Read: 0 00:11:54.618 Data Units Written: 0 00:11:54.618 Host Read Commands: 0 00:11:54.618 Host Write Commands: 0 00:11:54.618 Controller Busy Time: 0 minutes 00:11:54.618 Power Cycles: 0 00:11:54.618 Power On Hours: 0 hours 00:11:54.618 Unsafe Shutdowns: 0 00:11:54.618 Unrecoverable Media Errors: 0 00:11:54.618 Lifetime Error Log Entries: 0 00:11:54.618 Warning Temperature Time: 0 minutes 00:11:54.618 Critical Temperature Time: 0 minutes 00:11:54.618 00:11:54.618 Number of Queues 00:11:54.618 ================ 00:11:54.618 Number of I/O Submission Queues: 127 00:11:54.618 Number of I/O Completion Queues: 127 00:11:54.618 00:11:54.618 Active Namespaces 00:11:54.618 ================= 00:11:54.618 Namespace ID:1 00:11:54.618 Error Recovery Timeout: Unlimited 00:11:54.618 Command Set Identifier: NVM (00h) 00:11:54.618 Deallocate: Supported 00:11:54.618 Deallocated/Unwritten Error: Not Supported 00:11:54.618 Deallocated Read Value: Unknown 00:11:54.618 Deallocate in Write Zeroes: Not Supported 00:11:54.618 Deallocated Guard Field: 0xFFFF 00:11:54.618 Flush: Supported 00:11:54.618 Reservation: Supported 00:11:54.618 Namespace Sharing Capabilities: Multiple Controllers 00:11:54.618 Size (in LBAs): 131072 (0GiB) 00:11:54.618 Capacity (in LBAs): 131072 (0GiB) 00:11:54.618 Utilization (in LBAs): 131072 (0GiB) 00:11:54.618 NGUID: A33F985EDADF4BFE8C09E8756CFBE88E 00:11:54.618 UUID: a33f985e-dadf-4bfe-8c09-e8756cfbe88e 00:11:54.618 Thin Provisioning: Not Supported 00:11:54.618 Per-NS Atomic Units: Yes 00:11:54.618 Atomic Boundary Size (Normal): 0 00:11:54.618 Atomic Boundary Size 
(PFail): 0 00:11:54.618 Atomic Boundary Offset: 0 00:11:54.618 Maximum Single Source Range Length: 65535 00:11:54.618 Maximum Copy Length: 65535 00:11:54.618 Maximum Source Range Count: 1 00:11:54.618 NGUID/EUI64 Never Reused: No 00:11:54.618 Namespace Write Protected: No 00:11:54.618 Number of LBA Formats: 1 00:11:54.618 Current LBA Format: LBA Format #00 00:11:54.618 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:54.618 00:11:54.618 13:50:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:11:54.876 EAL: No free 2048 kB hugepages reported on node 1 00:11:54.876 [2024-07-15 13:50:49.628535] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:00.159 Initializing NVMe Controllers 00:12:00.159 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:00.159 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:00.159 Initialization complete. Launching workers. 00:12:00.159 ======================================================== 00:12:00.159 Latency(us) 00:12:00.159 Device Information : IOPS MiB/s Average min max 00:12:00.159 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34654.27 135.37 3692.95 1162.64 7385.70 00:12:00.159 ======================================================== 00:12:00.159 Total : 34654.27 135.37 3692.95 1162.64 7385.70 00:12:00.159 00:12:00.159 [2024-07-15 13:50:54.734132] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:00.159 13:50:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:00.159 EAL: No free 2048 kB hugepages reported on node 1 00:12:00.159 [2024-07-15 13:50:54.975804] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:05.432 Initializing NVMe Controllers 00:12:05.432 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:05.432 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:05.432 Initialization complete. Launching workers. 
00:12:05.432 ======================================================== 00:12:05.432 Latency(us) 00:12:05.432 Device Information : IOPS MiB/s Average min max 00:12:05.432 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 32009.69 125.04 3998.12 1217.03 8293.24 00:12:05.432 ======================================================== 00:12:05.432 Total : 32009.69 125.04 3998.12 1217.03 8293.24 00:12:05.432 00:12:05.432 [2024-07-15 13:50:59.995676] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:05.432 13:51:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:05.432 EAL: No free 2048 kB hugepages reported on node 1 00:12:05.432 [2024-07-15 13:51:00.212732] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:10.794 [2024-07-15 13:51:05.362887] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:10.794 Initializing NVMe Controllers 00:12:10.794 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:10.794 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:10.794 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:12:10.795 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:12:10.795 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:12:10.795 Initialization complete. Launching workers. 00:12:10.795 Starting thread on core 2 00:12:10.795 Starting thread on core 3 00:12:10.795 Starting thread on core 1 00:12:10.795 13:51:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:12:10.795 EAL: No free 2048 kB hugepages reported on node 1 00:12:11.054 [2024-07-15 13:51:05.677261] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:14.339 [2024-07-15 13:51:08.754110] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:14.339 Initializing NVMe Controllers 00:12:14.339 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:14.339 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:14.339 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:12:14.339 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:12:14.339 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:12:14.339 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:12:14.339 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:14.339 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:14.339 Initialization complete. Launching workers. 
00:12:14.339 Starting thread on core 1 with urgent priority queue 00:12:14.339 Starting thread on core 2 with urgent priority queue 00:12:14.339 Starting thread on core 3 with urgent priority queue 00:12:14.339 Starting thread on core 0 with urgent priority queue 00:12:14.339 SPDK bdev Controller (SPDK2 ) core 0: 5201.67 IO/s 19.22 secs/100000 ios 00:12:14.339 SPDK bdev Controller (SPDK2 ) core 1: 5157.33 IO/s 19.39 secs/100000 ios 00:12:14.339 SPDK bdev Controller (SPDK2 ) core 2: 5039.33 IO/s 19.84 secs/100000 ios 00:12:14.339 SPDK bdev Controller (SPDK2 ) core 3: 5400.33 IO/s 18.52 secs/100000 ios 00:12:14.339 ======================================================== 00:12:14.339 00:12:14.339 13:51:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:14.339 EAL: No free 2048 kB hugepages reported on node 1 00:12:14.339 [2024-07-15 13:51:09.057259] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:14.339 Initializing NVMe Controllers 00:12:14.339 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:14.339 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:14.339 Namespace ID: 1 size: 0GB 00:12:14.339 Initialization complete. 00:12:14.339 INFO: using host memory buffer for IO 00:12:14.339 Hello world! 00:12:14.339 [2024-07-15 13:51:09.067459] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:14.339 13:51:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:14.339 EAL: No free 2048 kB hugepages reported on node 1 00:12:14.596 [2024-07-15 13:51:09.360067] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:15.972 Initializing NVMe Controllers 00:12:15.972 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:15.972 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:15.972 Initialization complete. Launching workers. 
00:12:15.972 submit (in ns) avg, min, max = 7113.5, 3497.8, 4024752.2 00:12:15.972 complete (in ns) avg, min, max = 25797.8, 2056.7, 4024242.2 00:12:15.972 00:12:15.972 Submit histogram 00:12:15.972 ================ 00:12:15.972 Range in us Cumulative Count 00:12:15.972 3.484 - 3.508: 0.0542% ( 7) 00:12:15.972 3.508 - 3.532: 0.5886% ( 69) 00:12:15.972 3.532 - 3.556: 1.8511% ( 163) 00:12:15.972 3.556 - 3.579: 4.5775% ( 352) 00:12:15.972 3.579 - 3.603: 9.8443% ( 680) 00:12:15.972 3.603 - 3.627: 17.3108% ( 964) 00:12:15.972 3.627 - 3.650: 25.8849% ( 1107) 00:12:15.972 3.650 - 3.674: 33.4211% ( 973) 00:12:15.972 3.674 - 3.698: 40.9341% ( 970) 00:12:15.972 3.698 - 3.721: 48.6562% ( 997) 00:12:15.972 3.721 - 3.745: 53.5745% ( 635) 00:12:15.972 3.745 - 3.769: 57.3929% ( 493) 00:12:15.972 3.769 - 3.793: 60.4368% ( 393) 00:12:15.972 3.793 - 3.816: 64.0694% ( 469) 00:12:15.972 3.816 - 3.840: 67.5393% ( 448) 00:12:15.972 3.840 - 3.864: 71.6134% ( 526) 00:12:15.972 3.864 - 3.887: 75.4550% ( 496) 00:12:15.972 3.887 - 3.911: 79.1186% ( 473) 00:12:15.972 3.911 - 3.935: 82.5962% ( 449) 00:12:15.972 3.935 - 3.959: 84.7185% ( 274) 00:12:15.972 3.959 - 3.982: 86.4689% ( 226) 00:12:15.972 3.982 - 4.006: 87.8321% ( 176) 00:12:15.972 4.006 - 4.030: 89.1488% ( 170) 00:12:15.972 4.030 - 4.053: 90.2564% ( 143) 00:12:15.972 4.053 - 4.077: 91.3175% ( 137) 00:12:15.972 4.077 - 4.101: 92.0533% ( 95) 00:12:15.972 4.101 - 4.124: 92.8511% ( 103) 00:12:15.972 4.124 - 4.148: 93.3777% ( 68) 00:12:15.972 4.148 - 4.172: 93.7418% ( 47) 00:12:15.972 4.172 - 4.196: 94.0206% ( 36) 00:12:15.972 4.196 - 4.219: 94.2607% ( 31) 00:12:15.972 4.219 - 4.243: 94.4698% ( 27) 00:12:15.972 4.243 - 4.267: 94.6247% ( 20) 00:12:15.972 4.267 - 4.290: 94.8106% ( 24) 00:12:15.972 4.290 - 4.314: 94.9888% ( 23) 00:12:15.972 4.314 - 4.338: 95.0817% ( 12) 00:12:15.972 4.338 - 4.361: 95.2289% ( 19) 00:12:15.972 4.361 - 4.385: 95.3063% ( 10) 00:12:15.972 4.385 - 4.409: 95.3605% ( 7) 00:12:15.972 4.409 - 4.433: 95.4070% ( 6) 00:12:15.972 4.433 - 4.456: 95.4612% ( 7) 00:12:15.972 4.456 - 4.480: 95.5077% ( 6) 00:12:15.972 4.480 - 4.504: 95.5542% ( 6) 00:12:15.972 4.504 - 4.527: 95.5929% ( 5) 00:12:15.972 4.527 - 4.551: 95.6316% ( 5) 00:12:15.972 4.551 - 4.575: 95.6471% ( 2) 00:12:15.972 4.575 - 4.599: 95.6781% ( 4) 00:12:15.972 4.599 - 4.622: 95.7168% ( 5) 00:12:15.972 4.622 - 4.646: 95.7246% ( 1) 00:12:15.972 4.646 - 4.670: 95.7556% ( 4) 00:12:15.972 4.670 - 4.693: 95.7788% ( 3) 00:12:15.972 4.693 - 4.717: 95.8175% ( 5) 00:12:15.972 4.717 - 4.741: 95.8640% ( 6) 00:12:15.972 4.741 - 4.764: 95.9027% ( 5) 00:12:15.972 4.764 - 4.788: 95.9802% ( 10) 00:12:15.972 4.788 - 4.812: 96.0576% ( 10) 00:12:15.972 4.812 - 4.836: 96.0964% ( 5) 00:12:15.972 4.836 - 4.859: 96.1893% ( 12) 00:12:15.972 4.859 - 4.883: 96.2435% ( 7) 00:12:15.972 4.883 - 4.907: 96.3365% ( 12) 00:12:15.972 4.907 - 4.930: 96.4371% ( 13) 00:12:15.972 4.930 - 4.954: 96.5456% ( 14) 00:12:15.972 4.954 - 4.978: 96.6695% ( 16) 00:12:15.972 4.978 - 5.001: 96.7857% ( 15) 00:12:15.972 5.001 - 5.025: 96.8941% ( 14) 00:12:15.972 5.025 - 5.049: 97.0335% ( 18) 00:12:15.972 5.049 - 5.073: 97.1265% ( 12) 00:12:15.972 5.073 - 5.096: 97.2117% ( 11) 00:12:15.972 5.096 - 5.120: 97.2736% ( 8) 00:12:15.972 5.120 - 5.144: 97.3666% ( 12) 00:12:15.972 5.144 - 5.167: 97.3898% ( 3) 00:12:15.972 5.167 - 5.191: 97.4595% ( 9) 00:12:15.972 5.191 - 5.215: 97.4983% ( 5) 00:12:15.972 5.215 - 5.239: 97.5525% ( 7) 00:12:15.972 5.239 - 5.262: 97.5835% ( 4) 00:12:15.972 5.262 - 5.286: 97.6377% ( 7) 00:12:15.972 5.286 - 5.310: 
97.6454% ( 1) 00:12:15.972 5.310 - 5.333: 97.6687% ( 3) 00:12:15.972 5.333 - 5.357: 97.6919% ( 3) 00:12:15.972 5.357 - 5.381: 97.7151% ( 3) 00:12:15.972 5.381 - 5.404: 97.7384% ( 3) 00:12:15.972 5.404 - 5.428: 97.7693% ( 4) 00:12:15.972 5.452 - 5.476: 97.7771% ( 1) 00:12:15.972 5.476 - 5.499: 97.8468% ( 9) 00:12:15.972 5.499 - 5.523: 97.8545% ( 1) 00:12:15.972 5.547 - 5.570: 97.8623% ( 1) 00:12:15.972 5.570 - 5.594: 97.8700% ( 1) 00:12:15.972 5.594 - 5.618: 97.8855% ( 2) 00:12:15.972 5.641 - 5.665: 97.8933% ( 1) 00:12:15.972 5.689 - 5.713: 97.9010% ( 1) 00:12:15.972 5.736 - 5.760: 97.9088% ( 1) 00:12:15.972 5.760 - 5.784: 97.9243% ( 2) 00:12:15.972 5.784 - 5.807: 97.9320% ( 1) 00:12:15.972 5.807 - 5.831: 97.9475% ( 2) 00:12:15.972 5.879 - 5.902: 97.9552% ( 1) 00:12:15.972 5.902 - 5.926: 97.9630% ( 1) 00:12:15.972 6.044 - 6.068: 97.9707% ( 1) 00:12:15.972 6.400 - 6.447: 97.9940% ( 3) 00:12:15.972 6.447 - 6.495: 98.0017% ( 1) 00:12:15.972 6.495 - 6.542: 98.0249% ( 3) 00:12:15.972 6.542 - 6.590: 98.0404% ( 2) 00:12:15.972 6.590 - 6.637: 98.0482% ( 1) 00:12:15.972 6.732 - 6.779: 98.0559% ( 1) 00:12:15.972 6.921 - 6.969: 98.0714% ( 2) 00:12:15.972 7.064 - 7.111: 98.0792% ( 1) 00:12:15.972 7.111 - 7.159: 98.0869% ( 1) 00:12:15.972 7.253 - 7.301: 98.0946% ( 1) 00:12:15.972 7.396 - 7.443: 98.1024% ( 1) 00:12:15.972 7.490 - 7.538: 98.1101% ( 1) 00:12:15.972 7.538 - 7.585: 98.1179% ( 1) 00:12:15.972 7.585 - 7.633: 98.1256% ( 1) 00:12:15.972 7.633 - 7.680: 98.1334% ( 1) 00:12:15.972 7.917 - 7.964: 98.1489% ( 2) 00:12:15.972 7.964 - 8.012: 98.1566% ( 1) 00:12:15.972 8.012 - 8.059: 98.1644% ( 1) 00:12:15.972 8.059 - 8.107: 98.1721% ( 1) 00:12:15.972 8.107 - 8.154: 98.1798% ( 1) 00:12:15.972 8.249 - 8.296: 98.1876% ( 1) 00:12:15.972 8.296 - 8.344: 98.2108% ( 3) 00:12:15.972 8.344 - 8.391: 98.2341% ( 3) 00:12:15.972 8.391 - 8.439: 98.2496% ( 2) 00:12:15.972 8.533 - 8.581: 98.2650% ( 2) 00:12:15.972 8.581 - 8.628: 98.2728% ( 1) 00:12:15.972 8.676 - 8.723: 98.2805% ( 1) 00:12:15.972 8.723 - 8.770: 98.2883% ( 1) 00:12:15.972 8.770 - 8.818: 98.2960% ( 1) 00:12:15.972 8.818 - 8.865: 98.3038% ( 1) 00:12:15.972 8.865 - 8.913: 98.3115% ( 1) 00:12:15.972 8.913 - 8.960: 98.3193% ( 1) 00:12:15.972 9.007 - 9.055: 98.3270% ( 1) 00:12:15.972 9.102 - 9.150: 98.3580% ( 4) 00:12:15.972 9.244 - 9.292: 98.3657% ( 1) 00:12:15.972 9.292 - 9.339: 98.3735% ( 1) 00:12:15.972 9.339 - 9.387: 98.3812% ( 1) 00:12:15.972 9.387 - 9.434: 98.3967% ( 2) 00:12:15.973 9.529 - 9.576: 98.4045% ( 1) 00:12:15.973 9.671 - 9.719: 98.4122% ( 1) 00:12:15.973 9.719 - 9.766: 98.4200% ( 1) 00:12:15.973 9.766 - 9.813: 98.4277% ( 1) 00:12:15.973 9.908 - 9.956: 98.4354% ( 1) 00:12:15.973 9.956 - 10.003: 98.4432% ( 1) 00:12:15.973 10.003 - 10.050: 98.4509% ( 1) 00:12:15.973 10.098 - 10.145: 98.4587% ( 1) 00:12:15.973 10.145 - 10.193: 98.4819% ( 3) 00:12:15.973 10.240 - 10.287: 98.4897% ( 1) 00:12:15.973 10.287 - 10.335: 98.4974% ( 1) 00:12:15.973 10.335 - 10.382: 98.5052% ( 1) 00:12:15.973 10.382 - 10.430: 98.5129% ( 1) 00:12:15.973 10.477 - 10.524: 98.5206% ( 1) 00:12:15.973 10.667 - 10.714: 98.5284% ( 1) 00:12:15.973 10.761 - 10.809: 98.5361% ( 1) 00:12:15.973 10.809 - 10.856: 98.5594% ( 3) 00:12:15.973 10.856 - 10.904: 98.5671% ( 1) 00:12:15.973 10.951 - 10.999: 98.5749% ( 1) 00:12:15.973 11.046 - 11.093: 98.5826% ( 1) 00:12:15.973 11.330 - 11.378: 98.5903% ( 1) 00:12:15.973 11.473 - 11.520: 98.5981% ( 1) 00:12:15.973 11.520 - 11.567: 98.6058% ( 1) 00:12:15.973 11.662 - 11.710: 98.6136% ( 1) 00:12:15.973 11.947 - 11.994: 98.6213% ( 1) 00:12:15.973 
12.041 - 12.089: 98.6291% ( 1) 00:12:15.973 12.089 - 12.136: 98.6368% ( 1) 00:12:15.973 12.231 - 12.326: 98.6446% ( 1) 00:12:15.973 12.326 - 12.421: 98.6601% ( 2) 00:12:15.973 12.421 - 12.516: 98.6678% ( 1) 00:12:15.973 12.516 - 12.610: 98.6910% ( 3) 00:12:15.973 12.610 - 12.705: 98.6988% ( 1) 00:12:15.973 12.895 - 12.990: 98.7143% ( 2) 00:12:15.973 12.990 - 13.084: 98.7220% ( 1) 00:12:15.973 13.179 - 13.274: 98.7298% ( 1) 00:12:15.973 13.369 - 13.464: 98.7453% ( 2) 00:12:15.973 13.464 - 13.559: 98.7530% ( 1) 00:12:15.973 13.559 - 13.653: 98.7607% ( 1) 00:12:15.973 13.748 - 13.843: 98.7840% ( 3) 00:12:15.973 13.843 - 13.938: 98.7995% ( 2) 00:12:15.973 13.938 - 14.033: 98.8072% ( 1) 00:12:15.973 14.222 - 14.317: 98.8150% ( 1) 00:12:15.973 14.317 - 14.412: 98.8227% ( 1) 00:12:15.973 14.412 - 14.507: 98.8305% ( 1) 00:12:15.973 14.696 - 14.791: 98.8382% ( 1) 00:12:15.973 14.981 - 15.076: 98.8459% ( 1) 00:12:15.973 15.076 - 15.170: 98.8537% ( 1) 00:12:15.973 15.170 - 15.265: 98.8692% ( 2) 00:12:15.973 15.455 - 15.550: 98.8769% ( 1) 00:12:15.973 16.308 - 16.403: 98.8847% ( 1) 00:12:15.973 17.161 - 17.256: 98.8924% ( 1) 00:12:15.973 17.256 - 17.351: 98.9079% ( 2) 00:12:15.973 17.351 - 17.446: 98.9234% ( 2) 00:12:15.973 17.446 - 17.541: 98.9389% ( 2) 00:12:15.973 17.541 - 17.636: 98.9466% ( 1) 00:12:15.973 17.636 - 17.730: 99.0318% ( 11) 00:12:15.973 17.730 - 17.825: 99.0938% ( 8) 00:12:15.973 17.825 - 17.920: 99.1248% ( 4) 00:12:15.973 17.920 - 18.015: 99.1403% ( 2) 00:12:15.973 18.015 - 18.110: 99.1790% ( 5) 00:12:15.973 18.110 - 18.204: 99.2564% ( 10) 00:12:15.973 18.204 - 18.299: 99.3649% ( 14) 00:12:15.973 18.299 - 18.394: 99.4114% ( 6) 00:12:15.973 18.394 - 18.489: 99.4966% ( 11) 00:12:15.973 18.489 - 18.584: 99.5585% ( 8) 00:12:15.973 18.584 - 18.679: 99.5972% ( 5) 00:12:15.973 18.679 - 18.773: 99.6592% ( 8) 00:12:15.973 18.773 - 18.868: 99.6670% ( 1) 00:12:15.973 18.868 - 18.963: 99.6902% ( 3) 00:12:15.973 18.963 - 19.058: 99.7289% ( 5) 00:12:15.973 19.058 - 19.153: 99.7367% ( 1) 00:12:15.973 19.153 - 19.247: 99.7521% ( 2) 00:12:15.973 19.247 - 19.342: 99.7676% ( 2) 00:12:15.973 19.342 - 19.437: 99.7831% ( 2) 00:12:15.973 19.437 - 19.532: 99.7909% ( 1) 00:12:15.973 19.627 - 19.721: 99.7986% ( 1) 00:12:15.973 19.911 - 20.006: 99.8064% ( 1) 00:12:15.973 20.290 - 20.385: 99.8141% ( 1) 00:12:15.973 20.385 - 20.480: 99.8219% ( 1) 00:12:15.973 21.049 - 21.144: 99.8296% ( 1) 00:12:15.973 23.040 - 23.135: 99.8373% ( 1) 00:12:15.973 23.988 - 24.083: 99.8451% ( 1) 00:12:15.973 24.083 - 24.178: 99.8528% ( 1) 00:12:15.973 25.031 - 25.221: 99.8606% ( 1) 00:12:15.973 25.790 - 25.979: 99.8761% ( 2) 00:12:15.973 26.738 - 26.927: 99.8838% ( 1) 00:12:15.973 26.927 - 27.117: 99.8916% ( 1) 00:12:15.973 27.117 - 27.307: 99.8993% ( 1) 00:12:15.973 28.065 - 28.255: 99.9071% ( 1) 00:12:15.973 29.013 - 29.203: 99.9148% ( 1) 00:12:15.973 30.530 - 30.720: 99.9225% ( 1) 00:12:15.973 3980.705 - 4004.978: 99.9768% ( 7) 00:12:15.973 4004.978 - 4029.250: 100.0000% ( 3) 00:12:15.973 00:12:15.973 Complete histogram 00:12:15.973 ================== 00:12:15.973 Range in us Cumulative Count 00:12:15.973 2.050 - 2.062: 0.4337% ( 56) 00:12:15.973 2.062 - 2.074: 27.8445% ( 3539) 00:12:15.973 2.074 - 2.086: 37.5184% ( 1249) 00:12:15.973 2.086 - 2.098: 41.1742% ( 472) 00:12:15.973 2.098 - 2.110: 55.3946% ( 1836) 00:12:15.973 2.110 - 2.121: 58.7948% ( 439) 00:12:15.973 2.121 - 2.133: 62.0788% ( 424) 00:12:15.973 2.133 - 2.145: 70.1495% ( 1042) 00:12:15.973 2.145 - 2.157: 71.9077% ( 227) 00:12:15.973 2.157 - 2.169: 74.4946% ( 334) 
00:12:15.973 2.169 - 2.181: 78.3440% ( 497) 00:12:15.973 2.181 - 2.193: 79.2812% ( 121) 00:12:15.973 2.193 - 2.204: 80.5747% ( 167) 00:12:15.973 2.204 - 2.216: 84.6720% ( 529) 00:12:15.973 2.216 - 2.228: 86.6780% ( 259) 00:12:15.973 2.228 - 2.240: 88.7925% ( 273) 00:12:15.973 2.240 - 2.252: 91.1703% ( 307) 00:12:15.973 2.252 - 2.264: 91.8829% ( 92) 00:12:15.973 2.264 - 2.276: 92.2314% ( 45) 00:12:15.973 2.276 - 2.287: 92.5335% ( 39) 00:12:15.973 2.287 - 2.299: 93.0679% ( 69) 00:12:15.973 2.299 - 2.311: 93.6333% ( 73) 00:12:15.973 2.311 - 2.323: 93.8425% ( 27) 00:12:15.973 2.323 - 2.335: 93.9044% ( 8) 00:12:15.973 2.335 - 2.347: 93.9974% ( 12) 00:12:15.973 2.347 - 2.359: 94.0903% ( 12) 00:12:15.973 2.359 - 2.370: 94.3846% ( 38) 00:12:15.973 2.370 - 2.382: 94.7332% ( 45) 00:12:15.973 2.382 - 2.394: 95.1747% ( 57) 00:12:15.973 2.394 - 2.406: 95.4225% ( 32) 00:12:15.973 2.406 - 2.418: 95.5697% ( 19) 00:12:15.973 2.418 - 2.430: 95.7013% ( 17) 00:12:15.973 2.430 - 2.441: 95.9105% ( 27) 00:12:15.973 2.441 - 2.453: 96.0344% ( 16) 00:12:15.973 2.453 - 2.465: 96.2203% ( 24) 00:12:15.973 2.465 - 2.477: 96.3829% ( 21) 00:12:15.973 2.477 - 2.489: 96.5146% ( 17) 00:12:15.973 2.489 - 2.501: 96.5843% ( 9) 00:12:15.973 2.501 - 2.513: 96.6540% ( 9) 00:12:15.973 2.513 - 2.524: 96.7237% ( 9) 00:12:15.973 2.524 - 2.536: 96.7625% ( 5) 00:12:15.973 2.536 - 2.548: 96.7857% ( 3) 00:12:15.973 2.548 - 2.560: 96.8244% ( 5) 00:12:15.973 2.560 - 2.572: 96.8554% ( 4) 00:12:15.973 2.572 - 2.584: 96.8786% ( 3) 00:12:15.973 2.584 - 2.596: 96.8941% ( 2) 00:12:15.973 2.596 - 2.607: 96.9096% ( 2) 00:12:15.973 2.607 - 2.619: 96.9483% ( 5) 00:12:15.973 2.619 - 2.631: 96.9716% ( 3) 00:12:15.973 2.631 - 2.643: 96.9871% ( 2) 00:12:15.973 2.643 - 2.655: 97.0026% ( 2) 00:12:15.973 2.655 - 2.667: 97.0103% ( 1) 00:12:15.973 2.667 - 2.679: 97.0180% ( 1) 00:12:15.973 2.690 - 2.702: 97.0413% ( 3) 00:12:15.973 2.702 - 2.714: 97.0568% ( 2) 00:12:15.973 2.714 - 2.726: 97.1032% ( 6) 00:12:15.973 2.726 - 2.738: 97.1265% ( 3) 00:12:15.973 2.738 - 2.750: 97.1497% ( 3) 00:12:15.973 2.750 - 2.761: 97.1730% ( 3) 00:12:15.973 2.761 - 2.773: 97.2272% ( 7) 00:12:15.973 2.773 - 2.785: 97.2891% ( 8) 00:12:15.973 2.785 - 2.797: 97.3124% ( 3) 00:12:15.973 2.797 - 2.809: 97.3279% ( 2) 00:12:15.973 2.809 - 2.821: 97.3588% ( 4) 00:12:15.973 2.821 - 2.833: 97.4053% ( 6) 00:12:15.973 2.833 - 2.844: 97.4983% ( 12) 00:12:15.973 2.844 - 2.856: 97.5292% ( 4) 00:12:15.973 2.856 - 2.868: 97.6144% ( 11) 00:12:15.973 2.868 - 2.880: 97.6532% ( 5) 00:12:15.973 2.880 - 2.892: 97.6996% ( 6) 00:12:15.973 2.892 - 2.904: 97.7539% ( 7) 00:12:15.973 2.904 - 2.916: 97.8003% ( 6) 00:12:15.973 2.916 - 2.927: 97.8236% ( 3) 00:12:15.973 2.927 - 2.939: 97.8313% ( 1) 00:12:15.973 2.939 - 2.951: 97.8468% ( 2) 00:12:15.973 2.951 - 2.963: 97.8855% ( 5) 00:12:15.973 2.963 - 2.975: 97.9010% ( 2) 00:12:15.973 2.975 - 2.987: 97.9552% ( 7) 00:12:15.973 2.999 - 3.010: 97.9630% ( 1) 00:12:15.973 3.010 - 3.022: 98.0017% ( 5) 00:12:15.973 3.022 - 3.034: 98.0559% ( 7) 00:12:15.973 3.034 - 3.058: 98.0946% ( 5) 00:12:15.973 3.058 - 3.081: 98.1101% ( 2) 00:12:15.973 3.081 - 3.105: 98.1411% ( 4) 00:12:15.973 3.105 - 3.129: 98.1644% ( 3) 00:12:15.973 3.129 - 3.153: 98.2108% ( 6) 00:12:15.973 3.153 - 3.176: 98.2418% ( 4) 00:12:15.973 3.176 - 3.200: 98.2650% ( 3) 00:12:15.973 3.200 - 3.224: 98.2960% ( 4) 00:12:15.973 3.224 - 3.247: 98.3115% ( 2) 00:12:15.973 3.247 - 3.271: 98.3425% ( 4) 00:12:15.973 3.271 - 3.295: 98.3580% ( 2) 00:12:15.973 3.319 - 3.342: 98.3735% ( 2) 00:12:15.973 3.342 - 3.366: 
98.3812% ( 1) 00:12:15.973 3.413 - 3.437: 98.3890% ( 1) 00:12:15.973 3.461 - 3.484: 98.3967% ( 1) 00:12:15.973 3.508 - 3.532: 98.4122% ( 2) 00:12:15.973 3.556 - 3.579: 98.4200% ( 1) 00:12:15.973 3.579 - 3.603: 98.4277% ( 1) 00:12:15.973 3.603 - 3.627: 98.4509% ( 3) 00:12:15.973 3.627 - 3.650: 98.4742% ( 3) 00:12:15.973 3.650 - 3.674: 98.4897% ( 2) 00:12:15.973 3.698 - 3.721: 98.4974% ( 1) 00:12:15.973 3.721 - 3.745: 98.5206% ( 3) 00:12:15.973 3.745 - 3.769: 98.5284% ( 1) 00:12:15.974 3.769 - 3.793: 98.5516% ( 3) 00:12:15.974 3.793 - 3.816: 98.5594% ( 1) 00:12:15.974 3.840 - 3.864: 98.5749% ( 2) 00:12:15.974 3.864 - 3.887: 98.5826% ( 1) 00:12:15.974 3.887 - 3.911: 98.5903% ( 1) 00:12:15.974 3.982 - 4.006: 98.6058%[2024-07-15 13:51:10.464700] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:15.974 ( 2) 00:12:15.974 4.101 - 4.124: 98.6213% ( 2) 00:12:15.974 4.196 - 4.219: 98.6291% ( 1) 00:12:15.974 4.219 - 4.243: 98.6368% ( 1) 00:12:15.974 4.361 - 4.385: 98.6446% ( 1) 00:12:15.974 4.551 - 4.575: 98.6523% ( 1) 00:12:15.974 5.476 - 5.499: 98.6601% ( 1) 00:12:15.974 5.499 - 5.523: 98.6678% ( 1) 00:12:15.974 5.807 - 5.831: 98.6755% ( 1) 00:12:15.974 5.997 - 6.021: 98.6833% ( 1) 00:12:15.974 6.021 - 6.044: 98.6910% ( 1) 00:12:15.974 6.116 - 6.163: 98.6988% ( 1) 00:12:15.974 6.305 - 6.353: 98.7143% ( 2) 00:12:15.974 6.400 - 6.447: 98.7220% ( 1) 00:12:15.974 6.447 - 6.495: 98.7298% ( 1) 00:12:15.974 6.495 - 6.542: 98.7375% ( 1) 00:12:15.974 6.542 - 6.590: 98.7453% ( 1) 00:12:15.974 6.684 - 6.732: 98.7607% ( 2) 00:12:15.974 6.827 - 6.874: 98.7685% ( 1) 00:12:15.974 7.159 - 7.206: 98.7762% ( 1) 00:12:15.974 7.253 - 7.301: 98.7840% ( 1) 00:12:15.974 7.538 - 7.585: 98.7917% ( 1) 00:12:15.974 7.633 - 7.680: 98.8072% ( 2) 00:12:15.974 7.870 - 7.917: 98.8150% ( 1) 00:12:15.974 8.249 - 8.296: 98.8227% ( 1) 00:12:15.974 8.486 - 8.533: 98.8382% ( 2) 00:12:15.974 8.818 - 8.865: 98.8459% ( 1) 00:12:15.974 9.766 - 9.813: 98.8537% ( 1) 00:12:15.974 13.843 - 13.938: 98.8614% ( 1) 00:12:15.974 15.360 - 15.455: 98.8692% ( 1) 00:12:15.974 15.550 - 15.644: 98.8769% ( 1) 00:12:15.974 15.739 - 15.834: 98.8847% ( 1) 00:12:15.974 15.834 - 15.929: 98.9157% ( 4) 00:12:15.974 15.929 - 16.024: 98.9466% ( 4) 00:12:15.974 16.024 - 16.119: 98.9699% ( 3) 00:12:15.974 16.119 - 16.213: 98.9931% ( 3) 00:12:15.974 16.213 - 16.308: 99.0241% ( 4) 00:12:15.974 16.308 - 16.403: 99.0473% ( 3) 00:12:15.974 16.403 - 16.498: 99.1170% ( 9) 00:12:15.974 16.498 - 16.593: 99.1480% ( 4) 00:12:15.974 16.593 - 16.687: 99.2100% ( 8) 00:12:15.974 16.687 - 16.782: 99.2487% ( 5) 00:12:15.974 16.782 - 16.877: 99.2564% ( 1) 00:12:15.974 16.877 - 16.972: 99.2642% ( 1) 00:12:15.974 16.972 - 17.067: 99.3029% ( 5) 00:12:15.974 17.067 - 17.161: 99.3184% ( 2) 00:12:15.974 17.161 - 17.256: 99.3262% ( 1) 00:12:15.974 17.351 - 17.446: 99.3339% ( 1) 00:12:15.974 17.541 - 17.636: 99.3494% ( 2) 00:12:15.974 17.636 - 17.730: 99.3571% ( 1) 00:12:15.974 17.730 - 17.825: 99.3649% ( 1) 00:12:15.974 17.825 - 17.920: 99.3726% ( 1) 00:12:15.974 17.920 - 18.015: 99.3804% ( 1) 00:12:15.974 18.015 - 18.110: 99.3881% ( 1) 00:12:15.974 18.204 - 18.299: 99.3959% ( 1) 00:12:15.974 18.299 - 18.394: 99.4036% ( 1) 00:12:15.974 18.489 - 18.584: 99.4114% ( 1) 00:12:15.974 3956.433 - 3980.705: 99.4191% ( 1) 00:12:15.974 3980.705 - 4004.978: 99.8141% ( 51) 00:12:15.974 4004.978 - 4029.250: 100.0000% ( 24) 00:12:15.974 00:12:15.974 13:51:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user 
/var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:12:15.974 13:51:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:15.974 13:51:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:12:15.974 13:51:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:12:15.974 13:51:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:15.974 [ 00:12:15.974 { 00:12:15.974 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:15.974 "subtype": "Discovery", 00:12:15.974 "listen_addresses": [], 00:12:15.974 "allow_any_host": true, 00:12:15.974 "hosts": [] 00:12:15.974 }, 00:12:15.974 { 00:12:15.974 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:15.974 "subtype": "NVMe", 00:12:15.974 "listen_addresses": [ 00:12:15.974 { 00:12:15.974 "trtype": "VFIOUSER", 00:12:15.974 "adrfam": "IPv4", 00:12:15.974 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:15.974 "trsvcid": "0" 00:12:15.974 } 00:12:15.974 ], 00:12:15.974 "allow_any_host": true, 00:12:15.974 "hosts": [], 00:12:15.974 "serial_number": "SPDK1", 00:12:15.974 "model_number": "SPDK bdev Controller", 00:12:15.974 "max_namespaces": 32, 00:12:15.974 "min_cntlid": 1, 00:12:15.974 "max_cntlid": 65519, 00:12:15.974 "namespaces": [ 00:12:15.974 { 00:12:15.974 "nsid": 1, 00:12:15.974 "bdev_name": "Malloc1", 00:12:15.974 "name": "Malloc1", 00:12:15.974 "nguid": "B51F7163289244A18F95D3B1196B493F", 00:12:15.974 "uuid": "b51f7163-2892-44a1-8f95-d3b1196b493f" 00:12:15.974 }, 00:12:15.974 { 00:12:15.974 "nsid": 2, 00:12:15.974 "bdev_name": "Malloc3", 00:12:15.974 "name": "Malloc3", 00:12:15.974 "nguid": "A1AC7008A6824C81982E72B24583EFB5", 00:12:15.974 "uuid": "a1ac7008-a682-4c81-982e-72b24583efb5" 00:12:15.974 } 00:12:15.974 ] 00:12:15.974 }, 00:12:15.974 { 00:12:15.974 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:15.974 "subtype": "NVMe", 00:12:15.974 "listen_addresses": [ 00:12:15.974 { 00:12:15.974 "trtype": "VFIOUSER", 00:12:15.974 "adrfam": "IPv4", 00:12:15.974 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:15.974 "trsvcid": "0" 00:12:15.974 } 00:12:15.974 ], 00:12:15.974 "allow_any_host": true, 00:12:15.974 "hosts": [], 00:12:15.974 "serial_number": "SPDK2", 00:12:15.974 "model_number": "SPDK bdev Controller", 00:12:15.974 "max_namespaces": 32, 00:12:15.974 "min_cntlid": 1, 00:12:15.974 "max_cntlid": 65519, 00:12:15.974 "namespaces": [ 00:12:15.974 { 00:12:15.974 "nsid": 1, 00:12:15.974 "bdev_name": "Malloc2", 00:12:15.974 "name": "Malloc2", 00:12:15.974 "nguid": "A33F985EDADF4BFE8C09E8756CFBE88E", 00:12:15.974 "uuid": "a33f985e-dadf-4bfe-8c09-e8756cfbe88e" 00:12:15.974 } 00:12:15.974 ] 00:12:15.974 } 00:12:15.974 ] 00:12:15.974 13:51:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:15.974 13:51:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3704968 00:12:15.974 13:51:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:12:15.974 13:51:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:15.974 13:51:10 nvmf_tcp.nvmf_vfio_user -- 
common/autotest_common.sh@1265 -- # local i=0 00:12:15.974 13:51:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:15.974 13:51:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:15.974 13:51:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:12:15.974 13:51:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:15.974 13:51:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:12:15.974 EAL: No free 2048 kB hugepages reported on node 1 00:12:16.231 [2024-07-15 13:51:10.906223] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:16.231 Malloc4 00:12:16.231 13:51:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:12:16.489 [2024-07-15 13:51:11.261957] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:16.489 13:51:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:16.489 Asynchronous Event Request test 00:12:16.489 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:16.489 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:16.489 Registering asynchronous event callbacks... 00:12:16.489 Starting namespace attribute notice tests for all controllers... 00:12:16.489 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:16.489 aer_cb - Changed Namespace 00:12:16.489 Cleaning up... 
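The namespace-change AER exercised here has three steps: the aer example attaches to cnode2 over vfio-user and arms an Asynchronous Event Request, the test then hot-adds a second namespace through the RPC socket, and the resulting "namespace attribute changed" notice (log page 4, reported as "aer_cb - Changed Namespace" above) is what produces the updated subsystem listing that follows, with Malloc4 as nsid 2 on cnode2. A minimal sketch of that sequence, using only the commands visible in this run; the touch-file handshake the real nvmf_vfio_user.sh uses to know the listener is armed is reduced to a plain background job here:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  AER=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer

  # Attach to the vfio-user controller and wait for namespace-change events.
  $AER -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
       -n 2 -g -t /tmp/aer_touch_file &
  # (The real script waits for /tmp/aer_touch_file before issuing the RPCs below.)

  # Hot-adding a namespace raises the AEN that the listener reports.
  $RPC bdev_malloc_create 64 512 --name Malloc4
  $RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2
  wait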
00:12:16.748 [ 00:12:16.748 { 00:12:16.748 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:16.748 "subtype": "Discovery", 00:12:16.748 "listen_addresses": [], 00:12:16.748 "allow_any_host": true, 00:12:16.748 "hosts": [] 00:12:16.748 }, 00:12:16.748 { 00:12:16.748 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:16.748 "subtype": "NVMe", 00:12:16.748 "listen_addresses": [ 00:12:16.748 { 00:12:16.748 "trtype": "VFIOUSER", 00:12:16.748 "adrfam": "IPv4", 00:12:16.748 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:16.748 "trsvcid": "0" 00:12:16.748 } 00:12:16.748 ], 00:12:16.748 "allow_any_host": true, 00:12:16.748 "hosts": [], 00:12:16.748 "serial_number": "SPDK1", 00:12:16.748 "model_number": "SPDK bdev Controller", 00:12:16.748 "max_namespaces": 32, 00:12:16.748 "min_cntlid": 1, 00:12:16.748 "max_cntlid": 65519, 00:12:16.748 "namespaces": [ 00:12:16.748 { 00:12:16.748 "nsid": 1, 00:12:16.748 "bdev_name": "Malloc1", 00:12:16.748 "name": "Malloc1", 00:12:16.748 "nguid": "B51F7163289244A18F95D3B1196B493F", 00:12:16.748 "uuid": "b51f7163-2892-44a1-8f95-d3b1196b493f" 00:12:16.748 }, 00:12:16.748 { 00:12:16.748 "nsid": 2, 00:12:16.748 "bdev_name": "Malloc3", 00:12:16.748 "name": "Malloc3", 00:12:16.748 "nguid": "A1AC7008A6824C81982E72B24583EFB5", 00:12:16.748 "uuid": "a1ac7008-a682-4c81-982e-72b24583efb5" 00:12:16.748 } 00:12:16.748 ] 00:12:16.748 }, 00:12:16.748 { 00:12:16.748 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:16.748 "subtype": "NVMe", 00:12:16.748 "listen_addresses": [ 00:12:16.748 { 00:12:16.748 "trtype": "VFIOUSER", 00:12:16.748 "adrfam": "IPv4", 00:12:16.748 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:16.748 "trsvcid": "0" 00:12:16.748 } 00:12:16.748 ], 00:12:16.748 "allow_any_host": true, 00:12:16.748 "hosts": [], 00:12:16.748 "serial_number": "SPDK2", 00:12:16.748 "model_number": "SPDK bdev Controller", 00:12:16.748 "max_namespaces": 32, 00:12:16.748 "min_cntlid": 1, 00:12:16.748 "max_cntlid": 65519, 00:12:16.748 "namespaces": [ 00:12:16.748 { 00:12:16.748 "nsid": 1, 00:12:16.748 "bdev_name": "Malloc2", 00:12:16.748 "name": "Malloc2", 00:12:16.748 "nguid": "A33F985EDADF4BFE8C09E8756CFBE88E", 00:12:16.748 "uuid": "a33f985e-dadf-4bfe-8c09-e8756cfbe88e" 00:12:16.748 }, 00:12:16.748 { 00:12:16.748 "nsid": 2, 00:12:16.748 "bdev_name": "Malloc4", 00:12:16.748 "name": "Malloc4", 00:12:16.748 "nguid": "EB15E5CB1D664D8BAF40B401F3BC35B5", 00:12:16.748 "uuid": "eb15e5cb-1d66-4d8b-af40-b401f3bc35b5" 00:12:16.748 } 00:12:16.748 ] 00:12:16.748 } 00:12:16.748 ] 00:12:16.748 13:51:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3704968 00:12:16.748 13:51:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:12:16.748 13:51:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3698742 00:12:16.748 13:51:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 3698742 ']' 00:12:16.748 13:51:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 3698742 00:12:16.748 13:51:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:12:16.748 13:51:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:16.749 13:51:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3698742 00:12:16.749 13:51:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:16.749 13:51:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo 
']' 00:12:16.749 13:51:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3698742' 00:12:16.749 killing process with pid 3698742 00:12:16.749 13:51:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 3698742 00:12:16.749 13:51:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 3698742 00:12:17.316 13:51:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:17.316 13:51:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:17.316 13:51:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:12:17.316 13:51:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:12:17.316 13:51:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:12:17.316 13:51:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3705111 00:12:17.316 13:51:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:12:17.316 13:51:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3705111' 00:12:17.316 Process pid: 3705111 00:12:17.316 13:51:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:17.316 13:51:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3705111 00:12:17.316 13:51:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 3705111 ']' 00:12:17.316 13:51:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:17.316 13:51:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:17.316 13:51:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:17.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:17.316 13:51:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:17.316 13:51:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:17.316 [2024-07-15 13:51:11.944397] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:12:17.316 [2024-07-15 13:51:11.945359] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:12:17.316 [2024-07-15 13:51:11.945417] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:17.316 EAL: No free 2048 kB hugepages reported on node 1 00:12:17.316 [2024-07-15 13:51:12.005020] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:17.316 [2024-07-15 13:51:12.117283] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:17.316 [2024-07-15 13:51:12.117346] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:17.316 [2024-07-15 13:51:12.117374] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:17.316 [2024-07-15 13:51:12.117386] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:17.316 [2024-07-15 13:51:12.117395] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:17.316 [2024-07-15 13:51:12.117449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:17.316 [2024-07-15 13:51:12.117505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:17.316 [2024-07-15 13:51:12.117572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:17.317 [2024-07-15 13:51:12.117575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.575 [2024-07-15 13:51:12.222669] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:12:17.575 [2024-07-15 13:51:12.222897] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:12:17.575 [2024-07-15 13:51:12.223184] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:12:17.575 [2024-07-15 13:51:12.223768] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:12:17.575 [2024-07-15 13:51:12.224017] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:12:17.575 13:51:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:17.575 13:51:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:12:17.575 13:51:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:18.509 13:51:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:12:18.766 13:51:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:18.766 13:51:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:18.766 13:51:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:18.766 13:51:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:18.766 13:51:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:19.023 Malloc1 00:12:19.023 13:51:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:19.281 13:51:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:19.538 13:51:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:19.796 13:51:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 
$NUM_DEVICES) 00:12:19.796 13:51:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:19.796 13:51:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:20.055 Malloc2 00:12:20.315 13:51:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:20.315 13:51:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:20.881 13:51:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:21.139 13:51:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:12:21.139 13:51:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3705111 00:12:21.139 13:51:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 3705111 ']' 00:12:21.139 13:51:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 3705111 00:12:21.139 13:51:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:12:21.139 13:51:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:21.139 13:51:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3705111 00:12:21.139 13:51:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:21.139 13:51:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:21.139 13:51:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3705111' 00:12:21.139 killing process with pid 3705111 00:12:21.139 13:51:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 3705111 00:12:21.139 13:51:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 3705111 00:12:21.397 13:51:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:21.397 13:51:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:21.397 00:12:21.397 real 0m52.695s 00:12:21.397 user 3m27.701s 00:12:21.397 sys 0m4.425s 00:12:21.397 13:51:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:21.397 13:51:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:21.397 ************************************ 00:12:21.397 END TEST nvmf_vfio_user 00:12:21.397 ************************************ 00:12:21.397 13:51:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:21.397 13:51:16 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:21.397 13:51:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:21.397 13:51:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:21.397 13:51:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:21.397 ************************************ 00:12:21.397 START 
TEST nvmf_vfio_user_nvme_compliance 00:12:21.397 ************************************ 00:12:21.397 13:51:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:21.397 * Looking for test storage... 00:12:21.397 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:12:21.397 13:51:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:21.397 13:51:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:12:21.397 13:51:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:21.397 13:51:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:21.397 13:51:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:21.397 13:51:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:21.397 13:51:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:21.397 13:51:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:21.397 13:51:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:21.397 13:51:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:21.397 13:51:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:21.397 13:51:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:21.397 13:51:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:21.397 13:51:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:21.397 13:51:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:21.397 13:51:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:21.397 13:51:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:21.397 13:51:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:21.397 13:51:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:21.397 13:51:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:21.397 13:51:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:21.397 13:51:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:21.397 13:51:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.397 13:51:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.397 13:51:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.397 13:51:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:12:21.397 13:51:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.397 13:51:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:12:21.397 13:51:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:21.397 13:51:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:21.397 13:51:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:21.397 13:51:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:21.397 13:51:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:21.397 13:51:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:21.397 13:51:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:21.397 13:51:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:21.397 13:51:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:12:21.397 13:51:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:21.397 13:51:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:12:21.397 13:51:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:12:21.397 13:51:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:12:21.397 13:51:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3705709 00:12:21.397 13:51:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:21.397 13:51:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3705709' 00:12:21.397 Process pid: 3705709 00:12:21.397 13:51:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:21.397 13:51:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3705709 00:12:21.397 13:51:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 3705709 ']' 00:12:21.397 13:51:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.397 13:51:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:21.397 13:51:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:21.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:21.397 13:51:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:21.397 13:51:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:21.397 [2024-07-15 13:51:16.224104] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:12:21.397 [2024-07-15 13:51:16.224178] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:21.656 EAL: No free 2048 kB hugepages reported on node 1 00:12:21.656 [2024-07-15 13:51:16.282336] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:21.656 [2024-07-15 13:51:16.389363] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:21.656 [2024-07-15 13:51:16.389416] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:21.656 [2024-07-15 13:51:16.389443] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:21.656 [2024-07-15 13:51:16.389460] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:21.656 [2024-07-15 13:51:16.389469] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:21.656 [2024-07-15 13:51:16.389561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:21.656 [2024-07-15 13:51:16.389664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:21.656 [2024-07-15 13:51:16.389667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.915 13:51:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:21.915 13:51:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:12:21.915 13:51:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:12:22.848 13:51:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:22.848 13:51:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:12:22.848 13:51:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:22.848 13:51:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.848 13:51:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:22.848 13:51:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.848 13:51:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:12:22.849 13:51:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:22.849 13:51:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.849 13:51:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:22.849 malloc0 00:12:22.849 13:51:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.849 13:51:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:12:22.849 13:51:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.849 13:51:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:22.849 13:51:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.849 13:51:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:22.849 13:51:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.849 13:51:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:22.849 13:51:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.849 13:51:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:22.849 13:51:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.849 13:51:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:22.849 13:51:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.849 
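Condensed, the rpc_cmd sequence just issued is the standard vfio-user subsystem setup; the same commands can be replayed through rpc.py (script location and socket path assumed, all names and arguments taken from the log):

    mkdir -p /var/run/vfio-user
    ./scripts/rpc.py nvmf_create_transport -t VFIOUSER
    ./scripts/rpc.py bdev_malloc_create 64 512 -b malloc0          # 64 MB malloc bdev, 512-byte blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0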
13:51:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:12:22.849 EAL: No free 2048 kB hugepages reported on node 1 00:12:23.108 00:12:23.108 00:12:23.108 CUnit - A unit testing framework for C - Version 2.1-3 00:12:23.108 http://cunit.sourceforge.net/ 00:12:23.108 00:12:23.108 00:12:23.108 Suite: nvme_compliance 00:12:23.109 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-15 13:51:17.749285] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:23.109 [2024-07-15 13:51:17.750731] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:12:23.109 [2024-07-15 13:51:17.750769] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:12:23.109 [2024-07-15 13:51:17.750799] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:12:23.109 [2024-07-15 13:51:17.752297] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:23.109 passed 00:12:23.109 Test: admin_identify_ctrlr_verify_fused ...[2024-07-15 13:51:17.837897] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:23.109 [2024-07-15 13:51:17.840921] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:23.109 passed 00:12:23.109 Test: admin_identify_ns ...[2024-07-15 13:51:17.926263] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:23.368 [2024-07-15 13:51:17.986755] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:12:23.368 [2024-07-15 13:51:17.994771] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:12:23.368 [2024-07-15 13:51:18.015876] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:23.368 passed 00:12:23.368 Test: admin_get_features_mandatory_features ...[2024-07-15 13:51:18.099514] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:23.368 [2024-07-15 13:51:18.102531] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:23.368 passed 00:12:23.368 Test: admin_get_features_optional_features ...[2024-07-15 13:51:18.186045] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:23.368 [2024-07-15 13:51:18.189071] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:23.625 passed 00:12:23.625 Test: admin_set_features_number_of_queues ...[2024-07-15 13:51:18.273242] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:23.625 [2024-07-15 13:51:18.377844] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:23.625 passed 00:12:23.625 Test: admin_get_log_page_mandatory_logs ...[2024-07-15 13:51:18.458460] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:23.625 [2024-07-15 13:51:18.463487] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:23.882 passed 00:12:23.882 Test: admin_get_log_page_with_lpo ...[2024-07-15 13:51:18.547356] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:23.882 [2024-07-15 13:51:18.615767] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:12:23.882 [2024-07-15 13:51:18.628847] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:23.882 passed 00:12:23.882 Test: fabric_property_get ...[2024-07-15 13:51:18.709469] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:23.882 [2024-07-15 13:51:18.710733] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:12:23.882 [2024-07-15 13:51:18.714508] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:24.141 passed 00:12:24.141 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-15 13:51:18.797056] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:24.141 [2024-07-15 13:51:18.798345] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:12:24.141 [2024-07-15 13:51:18.802106] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:24.141 passed 00:12:24.141 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-15 13:51:18.883294] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:24.141 [2024-07-15 13:51:18.970749] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:24.399 [2024-07-15 13:51:18.989746] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:24.399 [2024-07-15 13:51:18.994872] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:24.399 passed 00:12:24.399 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-15 13:51:19.080040] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:24.399 [2024-07-15 13:51:19.081355] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:12:24.399 [2024-07-15 13:51:19.083077] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:24.399 passed 00:12:24.399 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-15 13:51:19.167605] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:24.658 [2024-07-15 13:51:19.248764] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:24.658 [2024-07-15 13:51:19.272747] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:24.658 [2024-07-15 13:51:19.277867] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:24.658 passed 00:12:24.658 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-15 13:51:19.361488] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:24.658 [2024-07-15 13:51:19.362820] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:12:24.658 [2024-07-15 13:51:19.362861] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:12:24.658 [2024-07-15 13:51:19.364510] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:24.658 passed 00:12:24.658 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-15 13:51:19.450003] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:24.955 [2024-07-15 13:51:19.540763] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:12:24.955 [2024-07-15 13:51:19.548759] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:12:24.955 [2024-07-15 13:51:19.556761] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:12:24.955 [2024-07-15 13:51:19.561751] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:12:24.955 [2024-07-15 13:51:19.593858] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:24.955 passed 00:12:24.955 Test: admin_create_io_sq_verify_pc ...[2024-07-15 13:51:19.677462] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:24.955 [2024-07-15 13:51:19.697761] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:12:24.955 [2024-07-15 13:51:19.726867] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:24.955 passed 00:12:25.212 Test: admin_create_io_qp_max_qps ...[2024-07-15 13:51:19.811426] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:26.148 [2024-07-15 13:51:20.920755] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:12:26.714 [2024-07-15 13:51:21.297112] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:26.714 passed 00:12:26.714 Test: admin_create_io_sq_shared_cq ...[2024-07-15 13:51:21.380439] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:26.714 [2024-07-15 13:51:21.512752] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:26.714 [2024-07-15 13:51:21.549835] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:26.973 passed 00:12:26.973 00:12:26.973 Run Summary: Type Total Ran Passed Failed Inactive 00:12:26.973 suites 1 1 n/a 0 0 00:12:26.973 tests 18 18 18 0 0 00:12:26.973 asserts 360 360 360 0 n/a 00:12:26.973 00:12:26.973 Elapsed time = 1.578 seconds 00:12:26.973 13:51:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3705709 00:12:26.973 13:51:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 3705709 ']' 00:12:26.973 13:51:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 3705709 00:12:26.973 13:51:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:12:26.973 13:51:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:26.973 13:51:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3705709 00:12:26.973 13:51:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:26.973 13:51:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:26.973 13:51:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3705709' 00:12:26.973 killing process with pid 3705709 00:12:26.973 13:51:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 3705709 00:12:26.973 13:51:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 3705709 00:12:27.233 13:51:21 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:12:27.233 13:51:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:12:27.233 00:12:27.233 real 0m5.829s 00:12:27.233 user 0m16.309s 00:12:27.233 sys 0m0.565s 00:12:27.233 13:51:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:27.233 13:51:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:27.233 ************************************ 00:12:27.233 END TEST nvmf_vfio_user_nvme_compliance 00:12:27.233 ************************************ 00:12:27.233 13:51:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:27.233 13:51:21 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:27.233 13:51:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:27.233 13:51:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:27.233 13:51:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:27.233 ************************************ 00:12:27.233 START TEST nvmf_vfio_user_fuzz 00:12:27.233 ************************************ 00:12:27.233 13:51:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:27.233 * Looking for test storage... 00:12:27.233 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:27.233 13:51:22 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:27.233 13:51:22 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:12:27.233 13:51:22 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:27.233 13:51:22 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:27.233 13:51:22 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:27.233 13:51:22 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:27.233 13:51:22 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:27.233 13:51:22 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:27.233 13:51:22 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:27.233 13:51:22 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:27.233 13:51:22 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:27.233 13:51:22 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:27.233 13:51:22 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:27.233 13:51:22 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:27.233 13:51:22 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:27.233 13:51:22 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:27.233 13:51:22 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 
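Both the compliance run that just tore itself down and the fuzz target being set up here rely on the same trap/killprocess pattern from autotest_common.sh, so the nvmf_tgt process is reaped even when a test aborts. A simplified stand-in for that pattern (the real helper carries extra validation; this is only a sketch):

    killprocess() {
        # stop the target and wait for it so the socket and hugepages are actually released
        kill "$1" 2>/dev/null || return 0
        wait "$1" 2>/dev/null || true
    }
    trap 'killprocess "$nvmfpid"; exit 1' SIGINT SIGTERM EXIT
    # ... run the tests ...
    killprocess "$nvmfpid"
    trap - SIGINT SIGTERM EXIT      # clear the exit trap once shutdown succeeded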
00:12:27.233 13:51:22 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:27.233 13:51:22 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:27.233 13:51:22 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:27.233 13:51:22 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:27.233 13:51:22 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:27.233 13:51:22 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.233 13:51:22 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.233 13:51:22 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.233 13:51:22 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:12:27.233 13:51:22 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.233 13:51:22 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:12:27.233 13:51:22 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:27.233 13:51:22 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:27.233 13:51:22 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:27.233 13:51:22 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:27.233 13:51:22 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:27.233 13:51:22 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:27.233 13:51:22 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:27.234 13:51:22 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:27.234 13:51:22 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:27.234 13:51:22 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:27.234 13:51:22 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:27.234 13:51:22 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:12:27.234 13:51:22 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:27.234 13:51:22 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:27.234 13:51:22 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:12:27.234 13:51:22 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3706432 00:12:27.234 13:51:22 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:27.234 13:51:22 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3706432' 00:12:27.234 Process pid: 3706432 00:12:27.234 13:51:22 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:27.234 13:51:22 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3706432 00:12:27.234 13:51:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 3706432 ']' 00:12:27.234 13:51:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.234 13:51:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:27.234 13:51:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:27.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
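waitforlisten, echoed just above, blocks until the freshly forked target both stays alive and answers on its RPC socket. A simplified equivalent of the helper (the version in autotest_common.sh adds retry limits and more error handling):

    waitforlisten() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock}
        while kill -0 "$pid" 2>/dev/null; do
            # rpc_get_methods succeeds only once the app is listening on the socket
            ./scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && return 0
            sleep 0.2
        done
        return 1    # the target exited before its RPC socket came up
    }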
00:12:27.234 13:51:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:27.234 13:51:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:27.801 13:51:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:27.801 13:51:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:12:27.801 13:51:22 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:12:28.738 13:51:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:28.738 13:51:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.738 13:51:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:28.738 13:51:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.738 13:51:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:12:28.738 13:51:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:28.738 13:51:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.738 13:51:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:28.738 malloc0 00:12:28.738 13:51:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.738 13:51:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:12:28.738 13:51:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.738 13:51:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:28.738 13:51:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.738 13:51:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:28.738 13:51:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.738 13:51:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:28.738 13:51:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.738 13:51:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:28.738 13:51:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.738 13:51:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:28.738 13:51:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.738 13:51:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:12:28.738 13:51:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:13:00.859 Fuzzing completed. 
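Everything the fuzzer needs is the transport ID string assembled above; with a target configured as in the earlier RPC block, the same seeded 30-second run can be repeated by hand (binary path and flags copied from the log, not expanded further here):

    trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'
    ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a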
Shutting down the fuzz application 00:13:00.859 00:13:00.859 Dumping successful admin opcodes: 00:13:00.859 8, 9, 10, 24, 00:13:00.859 Dumping successful io opcodes: 00:13:00.859 0, 00:13:00.859 NS: 0x200003a1ef00 I/O qp, Total commands completed: 675463, total successful commands: 2628, random_seed: 1287024128 00:13:00.859 NS: 0x200003a1ef00 admin qp, Total commands completed: 115199, total successful commands: 940, random_seed: 1795643520 00:13:00.859 13:51:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:13:00.859 13:51:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.859 13:51:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:00.859 13:51:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.859 13:51:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3706432 00:13:00.859 13:51:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 3706432 ']' 00:13:00.859 13:51:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 3706432 00:13:00.859 13:51:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:13:00.859 13:51:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:00.859 13:51:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3706432 00:13:00.859 13:51:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:00.859 13:51:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:00.859 13:51:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3706432' 00:13:00.859 killing process with pid 3706432 00:13:00.859 13:51:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 3706432 00:13:00.859 13:51:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 3706432 00:13:00.859 13:51:54 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:13:00.859 13:51:54 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:13:00.859 00:13:00.859 real 0m32.284s 00:13:00.859 user 0m30.187s 00:13:00.859 sys 0m29.896s 00:13:00.859 13:51:54 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:00.859 13:51:54 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:00.859 ************************************ 00:13:00.859 END TEST nvmf_vfio_user_fuzz 00:13:00.859 ************************************ 00:13:00.859 13:51:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:00.859 13:51:54 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:00.859 13:51:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:00.859 13:51:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:00.859 13:51:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:00.859 ************************************ 
00:13:00.859 START TEST nvmf_host_management 00:13:00.859 ************************************ 00:13:00.859 13:51:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:00.859 * Looking for test storage... 00:13:00.859 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:00.859 13:51:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:00.859 13:51:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:13:00.860 13:51:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:00.860 13:51:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:00.860 13:51:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:00.860 13:51:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:00.860 13:51:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:00.860 13:51:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:00.860 13:51:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:00.860 13:51:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:00.860 13:51:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:00.860 13:51:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:00.860 13:51:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:00.860 13:51:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:00.860 13:51:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:00.860 13:51:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:00.860 13:51:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:00.860 13:51:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:00.860 13:51:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:00.860 13:51:54 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:00.860 13:51:54 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:00.860 13:51:54 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:00.860 13:51:54 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.860 
13:51:54 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.860 13:51:54 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.860 13:51:54 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:13:00.860 13:51:54 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.860 13:51:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:13:00.860 13:51:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:00.860 13:51:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:00.860 13:51:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:00.860 13:51:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:00.860 13:51:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:00.860 13:51:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:00.860 13:51:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:00.860 13:51:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:00.860 13:51:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:00.860 13:51:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:00.860 13:51:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:13:00.860 13:51:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:00.860 13:51:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:00.860 13:51:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:00.860 13:51:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:00.860 13:51:54 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:00.860 13:51:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:00.860 13:51:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:00.860 13:51:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.860 13:51:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:00.860 13:51:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:00.860 13:51:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:13:00.860 13:51:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:01.796 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:01.796 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:13:01.796 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:01.796 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:01.796 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:01.796 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:01.796 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:01.797 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:01.797 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:01.797 Found net devices under 0000:84:00.0: cvl_0_0 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:01.797 Found net devices under 0000:84:00.1: cvl_0_1 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:01.797 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:01.797 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:13:01.797 00:13:01.797 --- 10.0.0.2 ping statistics --- 00:13:01.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:01.797 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:01.797 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:01.797 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:13:01.797 00:13:01.797 --- 10.0.0.1 ping statistics --- 00:13:01.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:01.797 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=3711905 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 3711905 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 3711905 ']' 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:01.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:01.797 13:51:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:02.057 [2024-07-15 13:51:56.678184] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:13:02.057 [2024-07-15 13:51:56.678267] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:02.057 EAL: No free 2048 kB hugepages reported on node 1 00:13:02.057 [2024-07-15 13:51:56.743948] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:02.057 [2024-07-15 13:51:56.857280] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:02.057 [2024-07-15 13:51:56.857359] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:02.057 [2024-07-15 13:51:56.857373] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:02.057 [2024-07-15 13:51:56.857384] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:02.057 [2024-07-15 13:51:56.857394] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:02.057 [2024-07-15 13:51:56.857489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:02.057 [2024-07-15 13:51:56.857551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:02.057 [2024-07-15 13:51:56.857614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:02.057 [2024-07-15 13:51:56.857617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:02.316 13:51:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:02.316 13:51:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:13:02.316 13:51:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:02.316 13:51:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:02.316 13:51:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:02.316 13:51:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:02.316 13:51:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:02.316 13:51:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.316 13:51:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:02.316 [2024-07-15 13:51:57.016694] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:02.316 13:51:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.316 13:51:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:13:02.316 13:51:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:02.316 13:51:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:02.316 13:51:57 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:02.316 13:51:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:13:02.316 13:51:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:13:02.316 13:51:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.316 13:51:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:02.316 Malloc0 00:13:02.316 [2024-07-15 13:51:57.077827] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:02.316 13:51:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.316 13:51:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:13:02.316 13:51:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:02.316 13:51:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:02.316 13:51:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3711960 00:13:02.316 13:51:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3711960 /var/tmp/bdevperf.sock 00:13:02.316 13:51:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 3711960 ']' 00:13:02.316 13:51:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:02.316 13:51:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:13:02.316 13:51:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:13:02.316 13:51:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:02.316 13:51:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:02.316 13:51:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:02.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
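Because NET_TYPE=phy, the TCP target for host_management runs behind a network namespace built around the two e810 ports discovered above (cvl_0_0 on the target side, cvl_0_1 on the initiator side). The nvmf_tcp_init steps logged at nvmf/common.sh@244-268 reduce to roughly the following, every command taken from the log:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator stays in the default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address lives inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # sanity check both directions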
00:13:02.317 13:51:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:02.317 13:51:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:02.317 13:51:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:02.317 13:51:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:02.317 13:51:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:02.317 { 00:13:02.317 "params": { 00:13:02.317 "name": "Nvme$subsystem", 00:13:02.317 "trtype": "$TEST_TRANSPORT", 00:13:02.317 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:02.317 "adrfam": "ipv4", 00:13:02.317 "trsvcid": "$NVMF_PORT", 00:13:02.317 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:02.317 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:02.317 "hdgst": ${hdgst:-false}, 00:13:02.317 "ddgst": ${ddgst:-false} 00:13:02.317 }, 00:13:02.317 "method": "bdev_nvme_attach_controller" 00:13:02.317 } 00:13:02.317 EOF 00:13:02.317 )") 00:13:02.317 13:51:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:02.317 13:51:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:13:02.317 13:51:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:02.317 13:51:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:02.317 "params": { 00:13:02.317 "name": "Nvme0", 00:13:02.317 "trtype": "tcp", 00:13:02.317 "traddr": "10.0.0.2", 00:13:02.317 "adrfam": "ipv4", 00:13:02.317 "trsvcid": "4420", 00:13:02.317 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:02.317 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:02.317 "hdgst": false, 00:13:02.317 "ddgst": false 00:13:02.317 }, 00:13:02.317 "method": "bdev_nvme_attach_controller" 00:13:02.317 }' 00:13:02.576 [2024-07-15 13:51:57.158993] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:13:02.576 [2024-07-15 13:51:57.159082] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3711960 ] 00:13:02.576 EAL: No free 2048 kB hugepages reported on node 1 00:13:02.576 [2024-07-15 13:51:57.224437] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.576 [2024-07-15 13:51:57.335087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.834 Running I/O for 10 seconds... 
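The JSON that bdevperf reads on /dev/fd/63 above is produced by gen_nvmf_target_json (nvmf/common.sh), which expands one bdev_nvme_attach_controller stanza per subsystem id; the resolved values printed by the trace (tcp, 10.0.0.2, 4420, cnode0/host0) are its output for subsystem 0. A simplified reconstruction follows; the real helper additionally wraps these entries in the full config envelope that bdevperf's --json option expects (the jq . step in the trace).

# Simplified reconstruction of gen_nvmf_target_json as traced above:
# one attach-controller stanza per requested subsystem id, joined with commas.
gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    printf '%s\n' "${config[*]}"
}
# fed to the initiator via process substitution, e.g.:
#   bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 0) \
#            -q 64 -o 65536 -w verify -t 10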
00:13:02.834 13:51:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:02.834 13:51:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:13:02.834 13:51:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:13:02.834 13:51:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.834 13:51:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:02.834 13:51:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.834 13:51:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:02.834 13:51:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:13:02.834 13:51:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:13:02.834 13:51:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:13:02.834 13:51:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:13:02.834 13:51:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:13:02.834 13:51:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:13:02.834 13:51:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:02.834 13:51:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:02.834 13:51:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:02.834 13:51:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.834 13:51:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:02.834 13:51:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.834 13:51:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:13:02.834 13:51:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:13:02.834 13:51:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:13:03.091 13:51:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:13:03.091 13:51:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:03.091 13:51:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:03.091 13:51:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:03.091 13:51:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.091 13:51:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:03.352 13:51:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.352 13:51:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=542 00:13:03.352 13:51:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 542 -ge 100 ']' 00:13:03.352 13:51:57 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@59 -- # ret=0 00:13:03.352 13:51:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:13:03.352 13:51:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:13:03.352 13:51:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:03.352 13:51:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.352 13:51:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:03.352 [2024-07-15 13:51:57.960812] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.352 [2024-07-15 13:51:57.960887] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.352 [2024-07-15 13:51:57.960903] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.352 [2024-07-15 13:51:57.960916] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.352 [2024-07-15 13:51:57.960928] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.352 [2024-07-15 13:51:57.960941] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.352 [2024-07-15 13:51:57.960953] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.352 [2024-07-15 13:51:57.960966] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.352 [2024-07-15 13:51:57.960979] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.352 [2024-07-15 13:51:57.960991] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.352 [2024-07-15 13:51:57.961003] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.352 [2024-07-15 13:51:57.961016] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.352 [2024-07-15 13:51:57.961029] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.352 [2024-07-15 13:51:57.961041] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.352 [2024-07-15 13:51:57.961053] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.352 [2024-07-15 13:51:57.961065] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.352 [2024-07-15 13:51:57.961078] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.352 [2024-07-15 13:51:57.961090] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) 
to be set 00:13:03.352 [2024-07-15 13:51:57.961103] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.352 [2024-07-15 13:51:57.961115] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.352 [2024-07-15 13:51:57.961139] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.353 [2024-07-15 13:51:57.961151] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.353 [2024-07-15 13:51:57.961163] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.353 [2024-07-15 13:51:57.961186] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.353 [2024-07-15 13:51:57.961207] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.353 [2024-07-15 13:51:57.961219] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.353 [2024-07-15 13:51:57.961231] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.353 [2024-07-15 13:51:57.961243] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.353 [2024-07-15 13:51:57.961255] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.353 [2024-07-15 13:51:57.961268] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.353 [2024-07-15 13:51:57.961292] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.353 [2024-07-15 13:51:57.961321] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.353 [2024-07-15 13:51:57.961333] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.353 [2024-07-15 13:51:57.961346] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.353 [2024-07-15 13:51:57.961358] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.353 [2024-07-15 13:51:57.961370] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.353 [2024-07-15 13:51:57.961382] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.353 [2024-07-15 13:51:57.961395] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.353 [2024-07-15 13:51:57.961408] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.353 [2024-07-15 13:51:57.961421] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.353 [2024-07-15 13:51:57.961433] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.353 [2024-07-15 13:51:57.961450] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.353 [2024-07-15 13:51:57.961463] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.353 [2024-07-15 13:51:57.961475] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.353 [2024-07-15 13:51:57.961488] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.353 [2024-07-15 13:51:57.961501] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.353 [2024-07-15 13:51:57.961513] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.353 [2024-07-15 13:51:57.961526] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.353 [2024-07-15 13:51:57.961538] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.353 [2024-07-15 13:51:57.961551] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.353 [2024-07-15 13:51:57.961568] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.353 [2024-07-15 13:51:57.961581] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.353 [2024-07-15 13:51:57.961594] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.353 [2024-07-15 13:51:57.961607] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.353 [2024-07-15 13:51:57.961619] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.353 [2024-07-15 13:51:57.961631] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.353 [2024-07-15 13:51:57.961644] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.353 [2024-07-15 13:51:57.961657] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.353 [2024-07-15 13:51:57.961670] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.353 [2024-07-15 13:51:57.961682] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.353 [2024-07-15 13:51:57.961714] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.353 [2024-07-15 13:51:57.961727] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.353 [2024-07-15 13:51:57.961744] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df3a0 is same with the state(5) to be set 00:13:03.353 [2024-07-15 13:51:57.961862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.353 [2024-07-15 13:51:57.961901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.353 [2024-07-15 13:51:57.961932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.353 [2024-07-15 13:51:57.961948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.353 [2024-07-15 13:51:57.961964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.353 [2024-07-15 13:51:57.961977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.353 [2024-07-15 13:51:57.961993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.353 [2024-07-15 13:51:57.962007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.353 [2024-07-15 13:51:57.962022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.353 [2024-07-15 13:51:57.962035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.353 [2024-07-15 13:51:57.962066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.353 [2024-07-15 13:51:57.962079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.353 [2024-07-15 13:51:57.962095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.353 [2024-07-15 13:51:57.962122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.353 [2024-07-15 13:51:57.962137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.353 [2024-07-15 13:51:57.962151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.353 [2024-07-15 13:51:57.962166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.353 [2024-07-15 13:51:57.962185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.353 [2024-07-15 13:51:57.962200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.353 [2024-07-15 13:51:57.962213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.353 [2024-07-15 13:51:57.962228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.353 [2024-07-15 13:51:57.962241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.353 [2024-07-15 13:51:57.962256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.353 [2024-07-15 13:51:57.962270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.353 [2024-07-15 13:51:57.962284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.353 [2024-07-15 13:51:57.962297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.353 [2024-07-15 13:51:57.962312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.353 [2024-07-15 13:51:57.962326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.353 [2024-07-15 13:51:57.962341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.353 [2024-07-15 13:51:57.962354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.353 [2024-07-15 13:51:57.962369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.353 [2024-07-15 13:51:57.962382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.353 [2024-07-15 13:51:57.962397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.353 [2024-07-15 13:51:57.962411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.353 [2024-07-15 13:51:57.962426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.354 [2024-07-15 13:51:57.962439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.354 [2024-07-15 13:51:57.962455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.354 [2024-07-15 13:51:57.962467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.354 [2024-07-15 13:51:57.962486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 
lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.354 [2024-07-15 13:51:57.962500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.354 [2024-07-15 13:51:57.962515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.354 [2024-07-15 13:51:57.962529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.354 [2024-07-15 13:51:57.962543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.354 [2024-07-15 13:51:57.962557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.354 [2024-07-15 13:51:57.962573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.354 [2024-07-15 13:51:57.962586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.354 [2024-07-15 13:51:57.962601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.354 [2024-07-15 13:51:57.962614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.354 [2024-07-15 13:51:57.962630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.354 [2024-07-15 13:51:57.962643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.354 [2024-07-15 13:51:57.962658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.354 [2024-07-15 13:51:57.962672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.354 [2024-07-15 13:51:57.962687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.354 [2024-07-15 13:51:57.962700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.354 [2024-07-15 13:51:57.962715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.354 [2024-07-15 13:51:57.962762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.354 [2024-07-15 13:51:57.962781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.354 [2024-07-15 13:51:57.962795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.354 [2024-07-15 13:51:57.962811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.354 [2024-07-15 13:51:57.962825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.354 [2024-07-15 13:51:57.962841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.354 [2024-07-15 13:51:57.962855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.354 [2024-07-15 13:51:57.962870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.354 [2024-07-15 13:51:57.962893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.354 [2024-07-15 13:51:57.962909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.354 [2024-07-15 13:51:57.962924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.354 [2024-07-15 13:51:57.962940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.354 [2024-07-15 13:51:57.962953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.354 [2024-07-15 13:51:57.962968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.354 [2024-07-15 13:51:57.962982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.354 [2024-07-15 13:51:57.962997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.354 [2024-07-15 13:51:57.963011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.354 [2024-07-15 13:51:57.963030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.354 [2024-07-15 13:51:57.963058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.354 [2024-07-15 13:51:57.963074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.354 [2024-07-15 13:51:57.963092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.354 [2024-07-15 13:51:57.963107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.354 [2024-07-15 13:51:57.963120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.354 [2024-07-15 13:51:57.963135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:13:03.354 [2024-07-15 13:51:57.963147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.354 [2024-07-15 13:51:57.963162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.354 [2024-07-15 13:51:57.963175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.354 [2024-07-15 13:51:57.963190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.354 [2024-07-15 13:51:57.963202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.354 [2024-07-15 13:51:57.963217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.354 [2024-07-15 13:51:57.963231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.354 [2024-07-15 13:51:57.963246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.354 [2024-07-15 13:51:57.963259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.354 [2024-07-15 13:51:57.963273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.354 [2024-07-15 13:51:57.963291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.354 [2024-07-15 13:51:57.963306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.354 [2024-07-15 13:51:57.963320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.354 [2024-07-15 13:51:57.963335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.354 [2024-07-15 13:51:57.963348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.354 [2024-07-15 13:51:57.963363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.354 [2024-07-15 13:51:57.963376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.354 [2024-07-15 13:51:57.963391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.354 [2024-07-15 13:51:57.963404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.354 [2024-07-15 13:51:57.963418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:13:03.354 [2024-07-15 13:51:57.963432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.354 [2024-07-15 13:51:57.963447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.354 [2024-07-15 13:51:57.963460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.354 [2024-07-15 13:51:57.963475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.354 [2024-07-15 13:51:57.963488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.354 [2024-07-15 13:51:57.963502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.354 [2024-07-15 13:51:57.963515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.354 [2024-07-15 13:51:57.963530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.354 [2024-07-15 13:51:57.963543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.354 [2024-07-15 13:51:57.963558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.354 [2024-07-15 13:51:57.963571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.354 [2024-07-15 13:51:57.963586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.354 [2024-07-15 13:51:57.963599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.354 [2024-07-15 13:51:57.963613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.354 [2024-07-15 13:51:57.963626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.354 [2024-07-15 13:51:57.963644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.354 [2024-07-15 13:51:57.963658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.354 [2024-07-15 13:51:57.963674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.354 [2024-07-15 13:51:57.963687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.355 [2024-07-15 13:51:57.963702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.355 [2024-07-15 
13:51:57.963715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.355 [2024-07-15 13:51:57.963761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.355 [2024-07-15 13:51:57.963778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.355 [2024-07-15 13:51:57.963793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.355 [2024-07-15 13:51:57.963808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.355 [2024-07-15 13:51:57.963824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.355 [2024-07-15 13:51:57.963837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.355 [2024-07-15 13:51:57.963852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:03.355 [2024-07-15 13:51:57.963865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.355 [2024-07-15 13:51:57.963880] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea3a10 is same with the state(5) to be set 00:13:03.355 [2024-07-15 13:51:57.963950] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xea3a10 was disconnected and freed. reset controller. 
00:13:03.355 [2024-07-15 13:51:57.964015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.355 [2024-07-15 13:51:57.964047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.355 [2024-07-15 13:51:57.964078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.355 [2024-07-15 13:51:57.964090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.355 [2024-07-15 13:51:57.964109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.355 [2024-07-15 13:51:57.964121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.355 [2024-07-15 13:51:57.964134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.355 [2024-07-15 13:51:57.964147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.355 [2024-07-15 13:51:57.964159] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa72540 is same with the state(5) to be set 00:13:03.355 13:51:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.355 13:51:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:03.355 [2024-07-15 13:51:57.965341] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:13:03.355 13:51:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.355 13:51:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:03.355 task offset: 73728 on job bdev=Nvme0n1 fails 00:13:03.355 00:13:03.355 Latency(us) 00:13:03.355 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:03.355 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:03.355 Job: Nvme0n1 ended in about 0.40 seconds with error 00:13:03.355 Verification LBA range: start 0x0 length 0x400 00:13:03.355 Nvme0n1 : 0.40 1440.01 90.00 160.00 0.00 38861.52 6650.69 35340.89 00:13:03.355 =================================================================================================================== 00:13:03.355 Total : 1440.01 90.00 160.00 0.00 38861.52 6650.69 35340.89 00:13:03.355 [2024-07-15 13:51:57.967424] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:03.355 [2024-07-15 13:51:57.967464] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa72540 (9): Bad file descriptor 00:13:03.355 13:51:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.355 [2024-07-15 13:51:57.973059] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
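The read-count gate traced further up (host_management.sh@45-64: read_io_count=67, then 542, against a threshold of 100), which allowed the host-removal step above to proceed, boils down to this polling loop as reconstructed from the trace:

# waitforio: poll bdevperf's iostat over its RPC socket until the bdev
# reports at least 100 completed reads, retrying up to 10 times with a
# 250 ms pause between attempts.
waitforio() {
    local rpc_addr=$1 bdev=$2
    [ -n "$rpc_addr" ] || return 1
    [ -n "$bdev" ] || return 1
    local ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(rpc_cmd -s "$rpc_addr" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}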
00:13:03.355 13:51:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:13:04.293 13:51:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3711960 00:13:04.293 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3711960) - No such process 00:13:04.293 13:51:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:13:04.293 13:51:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:13:04.293 13:51:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:13:04.293 13:51:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:13:04.293 13:51:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:04.293 13:51:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:04.293 13:51:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:04.293 13:51:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:04.293 { 00:13:04.293 "params": { 00:13:04.293 "name": "Nvme$subsystem", 00:13:04.293 "trtype": "$TEST_TRANSPORT", 00:13:04.293 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:04.293 "adrfam": "ipv4", 00:13:04.293 "trsvcid": "$NVMF_PORT", 00:13:04.293 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:04.293 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:04.293 "hdgst": ${hdgst:-false}, 00:13:04.293 "ddgst": ${ddgst:-false} 00:13:04.293 }, 00:13:04.293 "method": "bdev_nvme_attach_controller" 00:13:04.293 } 00:13:04.293 EOF 00:13:04.293 )") 00:13:04.293 13:51:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:04.293 13:51:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:13:04.293 13:51:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:04.293 13:51:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:04.293 "params": { 00:13:04.293 "name": "Nvme0", 00:13:04.293 "trtype": "tcp", 00:13:04.293 "traddr": "10.0.0.2", 00:13:04.293 "adrfam": "ipv4", 00:13:04.293 "trsvcid": "4420", 00:13:04.293 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:04.293 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:04.293 "hdgst": false, 00:13:04.293 "ddgst": false 00:13:04.293 }, 00:13:04.293 "method": "bdev_nvme_attach_controller" 00:13:04.293 }' 00:13:04.293 [2024-07-15 13:51:59.022328] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:13:04.293 [2024-07-15 13:51:59.022425] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3712229 ] 00:13:04.293 EAL: No free 2048 kB hugepages reported on node 1 00:13:04.293 [2024-07-15 13:51:59.084512] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:04.551 [2024-07-15 13:51:59.194277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.810 Running I/O for 1 seconds... 
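The kill failure reported just above ("No such process") is the expected path: bdevperf exited along with the failed job, and both the EXIT trap armed earlier and the explicit kill before the second run tolerate an already-dead process, so the test keeps going. Roughly, as traced:

# Cleanup arrangement from host_management.sh: the trap covers abnormal
# exits, and the in-line kill is guarded so a missing pid is harmless.
trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' \
    SIGINT SIGTERM EXIT

kill -9 "$perfpid" || true               # first bdevperf may already be gone
rm -f /var/tmp/spdk_cpu_lock_00{1..4}    # drop its stale per-core lock files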
00:13:05.746 00:13:05.746 Latency(us) 00:13:05.746 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:05.746 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:05.746 Verification LBA range: start 0x0 length 0x400 00:13:05.746 Nvme0n1 : 1.01 1522.74 95.17 0.00 0.00 41367.18 6456.51 34175.81 00:13:05.746 =================================================================================================================== 00:13:05.746 Total : 1522.74 95.17 0.00 0.00 41367.18 6456.51 34175.81 00:13:06.004 13:52:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:13:06.005 13:52:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:13:06.005 13:52:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:13:06.005 13:52:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:06.005 13:52:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:13:06.005 13:52:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:06.005 13:52:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:13:06.005 13:52:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:06.005 13:52:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:13:06.005 13:52:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:06.005 13:52:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:06.005 rmmod nvme_tcp 00:13:06.005 rmmod nvme_fabrics 00:13:06.005 rmmod nvme_keyring 00:13:06.264 13:52:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:06.264 13:52:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:13:06.264 13:52:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:13:06.264 13:52:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 3711905 ']' 00:13:06.264 13:52:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 3711905 00:13:06.264 13:52:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 3711905 ']' 00:13:06.264 13:52:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 3711905 00:13:06.264 13:52:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:13:06.264 13:52:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:06.264 13:52:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3711905 00:13:06.264 13:52:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:06.264 13:52:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:06.264 13:52:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3711905' 00:13:06.264 killing process with pid 3711905 00:13:06.264 13:52:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 3711905 00:13:06.264 13:52:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 3711905 00:13:06.523 [2024-07-15 13:52:01.166061] app.c: 
710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:13:06.523 13:52:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:06.523 13:52:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:06.523 13:52:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:06.523 13:52:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:06.523 13:52:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:06.523 13:52:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:06.523 13:52:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:06.523 13:52:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:08.425 13:52:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:08.425 13:52:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:13:08.425 00:13:08.425 real 0m8.924s 00:13:08.425 user 0m20.218s 00:13:08.425 sys 0m2.870s 00:13:08.425 13:52:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:08.425 13:52:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:08.425 ************************************ 00:13:08.425 END TEST nvmf_host_management 00:13:08.425 ************************************ 00:13:08.425 13:52:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:08.425 13:52:03 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:08.425 13:52:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:08.425 13:52:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:08.425 13:52:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:08.683 ************************************ 00:13:08.683 START TEST nvmf_lvol 00:13:08.683 ************************************ 00:13:08.683 13:52:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:08.683 * Looking for test storage... 
00:13:08.683 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:08.683 13:52:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:08.683 13:52:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:13:08.683 13:52:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:08.683 13:52:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:08.683 13:52:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:08.683 13:52:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:08.683 13:52:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:08.683 13:52:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:08.683 13:52:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:08.683 13:52:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:08.683 13:52:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:08.683 13:52:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:08.683 13:52:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:08.683 13:52:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:08.683 13:52:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:08.683 13:52:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:08.683 13:52:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:08.683 13:52:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:08.683 13:52:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:08.683 13:52:03 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:08.683 13:52:03 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:08.683 13:52:03 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:08.683 13:52:03 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.683 13:52:03 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.683 13:52:03 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.683 13:52:03 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:13:08.683 13:52:03 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.683 13:52:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:13:08.683 13:52:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:08.683 13:52:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:08.683 13:52:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:08.683 13:52:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:08.683 13:52:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:08.683 13:52:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:08.683 13:52:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:08.683 13:52:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:08.683 13:52:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:08.683 13:52:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:08.683 13:52:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:13:08.683 13:52:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:13:08.683 13:52:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:08.683 13:52:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:13:08.683 13:52:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:08.683 13:52:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:08.683 13:52:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:08.683 13:52:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:08.683 13:52:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:08.683 13:52:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:08.683 13:52:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:08.683 13:52:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:08.683 13:52:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:08.683 13:52:03 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:08.683 13:52:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:13:08.683 13:52:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:10.585 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:10.585 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:13:10.585 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:10.585 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:10.585 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:10.585 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:10.585 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:10.585 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:13:10.585 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:10.585 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:13:10.585 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:13:10.585 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:13:10.585 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:13:10.585 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:13:10.585 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:13:10.585 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:10.585 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:10.585 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:10.585 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:10.585 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:10.585 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:10.585 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:10.585 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:10.585 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:10.585 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:10.585 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:10.585 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:10.585 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:10.585 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:10.585 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:10.585 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:10.585 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:10.585 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:10.585 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:10.585 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:10.585 13:52:05 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:10.585 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:10.585 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:10.585 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:10.586 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:10.586 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:10.586 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:10.586 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:10.586 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:10.586 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:10.586 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:10.586 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:10.586 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:10.586 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:10.586 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:10.586 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:10.586 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:10.586 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:10.586 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:10.586 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:10.586 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:10.586 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:10.586 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:10.586 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:10.586 Found net devices under 0000:84:00.0: cvl_0_0 00:13:10.586 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:10.586 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:10.586 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:10.586 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:10.586 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:10.586 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:10.586 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:10.586 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:10.586 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:10.586 Found net devices under 0000:84:00.1: cvl_0_1 00:13:10.586 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:10.586 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:10.586 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:13:10.586 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:10.586 
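With both E810 ports found (cvl_0_0 under 0000:84:00.0 and cvl_0_1 under 0000:84:00.1) the script sets is_hw=yes and moves on to nvmf_tcp_init. Condensed, and not the verbatim nvmf/common.sh code, the network setup traced below amounts to roughly this sequence; the interface names and 10.0.0.x addresses are the ones this run uses:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives inside the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the host namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # host -> namespace sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> host sanity check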
13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:10.586 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:10.586 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:10.586 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:10.586 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:10.586 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:10.586 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:10.586 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:10.586 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:10.586 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:10.586 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:10.586 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:10.586 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:10.586 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:10.586 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:10.586 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:10.586 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:10.586 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:10.586 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:10.844 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:10.844 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:10.844 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:10.844 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:10.844 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:13:10.844 00:13:10.844 --- 10.0.0.2 ping statistics --- 00:13:10.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:10.844 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:13:10.844 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:10.844 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:10.844 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:13:10.844 00:13:10.844 --- 10.0.0.1 ping statistics --- 00:13:10.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:10.844 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:13:10.844 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:10.844 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:13:10.844 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:10.844 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:10.844 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:10.844 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:10.844 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:10.844 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:10.844 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:10.844 13:52:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:13:10.844 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:10.844 13:52:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:10.844 13:52:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:10.844 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=3714439 00:13:10.844 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:10.844 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 3714439 00:13:10.844 13:52:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 3714439 ']' 00:13:10.844 13:52:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:10.844 13:52:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:10.844 13:52:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:10.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:10.844 13:52:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:10.844 13:52:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:10.844 [2024-07-15 13:52:05.533873] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:13:10.844 [2024-07-15 13:52:05.533945] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:10.844 EAL: No free 2048 kB hugepages reported on node 1 00:13:10.844 [2024-07-15 13:52:05.593851] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:11.101 [2024-07-15 13:52:05.696606] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:11.101 [2024-07-15 13:52:05.696678] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:11.101 [2024-07-15 13:52:05.696692] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:11.101 [2024-07-15 13:52:05.696703] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:11.101 [2024-07-15 13:52:05.696713] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:11.101 [2024-07-15 13:52:05.696794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:11.101 [2024-07-15 13:52:05.696861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:11.101 [2024-07-15 13:52:05.696864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.101 13:52:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:11.101 13:52:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:13:11.101 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:11.101 13:52:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:11.101 13:52:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:11.101 13:52:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:11.101 13:52:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:11.358 [2024-07-15 13:52:06.055228] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:11.358 13:52:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:11.614 13:52:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:13:11.614 13:52:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:11.870 13:52:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:13:11.870 13:52:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:13:12.127 13:52:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:13:12.383 13:52:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=a79481bd-a12c-43bd-a84d-c8509260707e 00:13:12.383 13:52:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a79481bd-a12c-43bd-a84d-c8509260707e lvol 20 00:13:12.640 13:52:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=710f1e92-9dda-4829-9554-3d22e8793c49 00:13:12.640 13:52:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:12.898 13:52:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 710f1e92-9dda-4829-9554-3d22e8793c49 00:13:13.155 13:52:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:13:13.419 [2024-07-15 13:52:08.182232] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:13.419 13:52:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:13.678 13:52:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3714775 00:13:13.678 13:52:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:13:13.678 13:52:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:13:13.678 EAL: No free 2048 kB hugepages reported on node 1 00:13:14.616 13:52:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 710f1e92-9dda-4829-9554-3d22e8793c49 MY_SNAPSHOT 00:13:15.183 13:52:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=7f580a27-f001-479d-ace0-b47b8c2e1c05 00:13:15.183 13:52:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 710f1e92-9dda-4829-9554-3d22e8793c49 30 00:13:15.441 13:52:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 7f580a27-f001-479d-ace0-b47b8c2e1c05 MY_CLONE 00:13:15.700 13:52:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=0325cee8-a55d-489a-8dcb-6b9cbcff2c62 00:13:15.700 13:52:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 0325cee8-a55d-489a-8dcb-6b9cbcff2c62 00:13:16.637 13:52:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3714775 00:13:24.771 Initializing NVMe Controllers 00:13:24.771 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:24.771 Controller IO queue size 128, less than required. 00:13:24.771 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:24.771 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:13:24.771 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:13:24.771 Initialization complete. Launching workers. 
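The nvmf_lvol flow traced above condenses to the RPC sequence below. This is a sketch, not the verbatim nvmf_lvol.sh: rpc_py stands for the full scripts/rpc.py path set at target/nvmf_lvol.sh@16, and the UUIDs returned in this run (a79481bd-..., 710f1e92-..., 7f580a27-...) will differ elsewhere.

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Back the lvstore with a raid0 of two 64 MiB malloc bdevs, then carve out a 20 MiB lvol.
  $rpc_py bdev_malloc_create 64 512                       # -> Malloc0
  $rpc_py bdev_malloc_create 64 512                       # -> Malloc1
  $rpc_py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$($rpc_py bdev_lvol_create_lvstore raid0 lvs)
  lvol=$($rpc_py bdev_lvol_create -u "$lvs" lvol 20)
  # Export the lvol over NVMe/TCP on the namespaced target address.
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # With spdk_nvme_perf writing to the namespace in the background, exercise the lvol features.
  snap=$($rpc_py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
  $rpc_py bdev_lvol_resize "$lvol" 30
  clone=$($rpc_py bdev_lvol_clone "$snap" MY_CLONE)
  $rpc_py bdev_lvol_inflate "$clone"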
00:13:24.771 ======================================================== 00:13:24.771 Latency(us) 00:13:24.771 Device Information : IOPS MiB/s Average min max 00:13:24.771 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10718.10 41.87 11947.71 2133.90 67893.35 00:13:24.771 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10607.20 41.43 12077.00 2213.58 65449.46 00:13:24.771 ======================================================== 00:13:24.772 Total : 21325.30 83.30 12012.02 2133.90 67893.35 00:13:24.772 00:13:24.772 13:52:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:24.772 13:52:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 710f1e92-9dda-4829-9554-3d22e8793c49 00:13:24.772 13:52:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a79481bd-a12c-43bd-a84d-c8509260707e 00:13:24.772 13:52:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:13:24.772 13:52:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:13:24.772 13:52:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:13:24.772 13:52:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:24.772 13:52:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:13:24.772 13:52:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:24.772 13:52:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:13:24.772 13:52:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:24.772 13:52:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:24.772 rmmod nvme_tcp 00:13:24.772 rmmod nvme_fabrics 00:13:24.772 rmmod nvme_keyring 00:13:24.772 13:52:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:24.772 13:52:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:13:24.772 13:52:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:13:24.772 13:52:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 3714439 ']' 00:13:24.772 13:52:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 3714439 00:13:24.772 13:52:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 3714439 ']' 00:13:24.772 13:52:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 3714439 00:13:24.772 13:52:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:13:24.772 13:52:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:24.772 13:52:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3714439 00:13:24.772 13:52:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:24.772 13:52:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:24.772 13:52:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3714439' 00:13:24.772 killing process with pid 3714439 00:13:24.772 13:52:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 3714439 00:13:24.772 13:52:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 3714439 00:13:25.341 13:52:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:25.341 
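Teardown above runs in the reverse order of setup: the subsystem is deleted first so its namespace lets go of the lvol, then the lvol and its store are removed, and nvmftestfini unloads the host-side modules and stops the target. A sketch, using the same rpc_py shorthand:

  $rpc_py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  $rpc_py bdev_lvol_delete "$lvol"
  $rpc_py bdev_lvol_delete_lvstore -u "$lvs"
  modprobe -v -r nvme-tcp                                 # produces the rmmod nvme_tcp/nvme_fabrics lines above
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"                                         # 3714439 in this run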
13:52:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:25.341 13:52:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:25.341 13:52:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:25.341 13:52:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:25.341 13:52:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:25.341 13:52:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:25.341 13:52:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:27.304 13:52:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:27.304 00:13:27.304 real 0m18.664s 00:13:27.304 user 1m3.804s 00:13:27.304 sys 0m5.764s 00:13:27.304 13:52:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:27.304 13:52:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:27.304 ************************************ 00:13:27.304 END TEST nvmf_lvol 00:13:27.304 ************************************ 00:13:27.304 13:52:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:27.304 13:52:21 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:27.304 13:52:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:27.304 13:52:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:27.304 13:52:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:27.304 ************************************ 00:13:27.304 START TEST nvmf_lvs_grow 00:13:27.304 ************************************ 00:13:27.304 13:52:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:27.304 * Looking for test storage... 
00:13:27.304 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:27.304 13:52:22 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:27.304 13:52:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:13:27.304 13:52:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:27.304 13:52:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:27.304 13:52:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:27.304 13:52:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:27.304 13:52:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:27.304 13:52:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:27.304 13:52:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:27.304 13:52:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:27.304 13:52:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:27.304 13:52:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:27.304 13:52:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:27.304 13:52:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:27.304 13:52:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:27.304 13:52:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:27.304 13:52:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:27.304 13:52:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:27.304 13:52:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:27.304 13:52:22 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:27.304 13:52:22 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:27.304 13:52:22 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:27.304 13:52:22 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.304 13:52:22 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.304 13:52:22 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.304 13:52:22 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:13:27.304 13:52:22 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.304 13:52:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:13:27.304 13:52:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:27.304 13:52:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:27.304 13:52:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:27.304 13:52:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:27.304 13:52:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:27.304 13:52:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:27.304 13:52:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:27.304 13:52:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:27.304 13:52:22 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:27.304 13:52:22 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:27.304 13:52:22 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:13:27.304 13:52:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:27.304 13:52:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:27.304 13:52:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:27.304 13:52:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:27.304 13:52:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:27.304 13:52:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:13:27.304 13:52:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:27.304 13:52:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:27.304 13:52:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:27.304 13:52:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:27.304 13:52:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:13:27.304 13:52:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:29.843 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:29.843 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:13:29.843 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:29.843 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:29.843 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:29.843 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:29.843 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:29.843 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:13:29.843 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:29.843 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:13:29.843 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:13:29.843 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:13:29.843 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:13:29.843 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:29.844 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:29.844 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:29.844 Found net devices under 0000:84:00.0: cvl_0_0 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:29.844 Found net devices under 0000:84:00.1: cvl_0_1 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:29.844 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:29.844 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:13:29.844 00:13:29.844 --- 10.0.0.2 ping statistics --- 00:13:29.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.844 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:29.844 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:29.844 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:13:29.844 00:13:29.844 --- 10.0.0.1 ping statistics --- 00:13:29.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.844 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=3718067 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 3718067 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 3718067 ']' 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:29.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:29.844 [2024-07-15 13:52:24.369511] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:13:29.844 [2024-07-15 13:52:24.369609] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:29.844 EAL: No free 2048 kB hugepages reported on node 1 00:13:29.844 [2024-07-15 13:52:24.435144] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:29.844 [2024-07-15 13:52:24.538350] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:29.844 [2024-07-15 13:52:24.538409] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
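nvmfappstart, seen here again for the lvs_grow suite with a single-core mask, boils down to launching nvmf_tgt inside the target namespace and waiting until its RPC socket answers. A rough sketch of what that looks like (the polling loop approximates waitforlisten rather than quoting autotest_common.sh):

  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  # poll until the app is up and listening on /var/tmp/spdk.sock
  until $rpc_py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
          sleep 0.1
  done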
00:13:29.844 [2024-07-15 13:52:24.538438] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:29.844 [2024-07-15 13:52:24.538449] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:29.844 [2024-07-15 13:52:24.538458] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:29.844 [2024-07-15 13:52:24.538484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:29.844 13:52:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:29.845 13:52:24 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:30.103 [2024-07-15 13:52:24.880448] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:30.103 13:52:24 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:13:30.103 13:52:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:30.103 13:52:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:30.103 13:52:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:30.103 ************************************ 00:13:30.103 START TEST lvs_grow_clean 00:13:30.103 ************************************ 00:13:30.103 13:52:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:13:30.103 13:52:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:30.103 13:52:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:30.103 13:52:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:30.103 13:52:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:30.103 13:52:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:30.103 13:52:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:30.103 13:52:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:30.103 13:52:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:30.103 13:52:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:30.361 13:52:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:13:30.361 13:52:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:30.619 13:52:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=5b205b2f-0e53-41f2-b45f-fa9c40b33215 00:13:30.619 13:52:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5b205b2f-0e53-41f2-b45f-fa9c40b33215 00:13:30.619 13:52:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:30.876 13:52:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:30.876 13:52:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:30.876 13:52:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5b205b2f-0e53-41f2-b45f-fa9c40b33215 lvol 150 00:13:31.134 13:52:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=7677ca54-4b3c-46be-80a6-237f02f7d984 00:13:31.134 13:52:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:31.134 13:52:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:31.393 [2024-07-15 13:52:26.160866] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:31.393 [2024-07-15 13:52:26.160956] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:31.393 true 00:13:31.393 13:52:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5b205b2f-0e53-41f2-b45f-fa9c40b33215 00:13:31.393 13:52:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:31.653 13:52:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:31.653 13:52:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:31.912 13:52:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7677ca54-4b3c-46be-80a6-237f02f7d984 00:13:32.171 13:52:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:32.429 [2024-07-15 13:52:27.139855] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:32.429 13:52:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:32.687 13:52:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3718464 00:13:32.687 13:52:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:32.687 13:52:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:32.687 13:52:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3718464 /var/tmp/bdevperf.sock 00:13:32.687 13:52:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 3718464 ']' 00:13:32.687 13:52:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:32.687 13:52:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:32.687 13:52:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:32.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:32.687 13:52:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:32.687 13:52:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:32.687 [2024-07-15 13:52:27.437758] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
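Once bdevperf is up (it was started with -z, so it idles until driven over /var/tmp/bdevperf.sock), the test attaches the remote subsystem as Nvme0 and then kicks off the workload, roughly:

  $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
          -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  $rpc_py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000     # confirm the namespace shows up as Nvme0n1
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
          -s /var/tmp/bdevperf.sock perform_tests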
00:13:32.687 [2024-07-15 13:52:27.437827] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3718464 ] 00:13:32.687 EAL: No free 2048 kB hugepages reported on node 1 00:13:32.687 [2024-07-15 13:52:27.494292] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.945 [2024-07-15 13:52:27.600869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:32.945 13:52:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:32.945 13:52:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:13:32.945 13:52:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:33.511 Nvme0n1 00:13:33.511 13:52:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:33.770 [ 00:13:33.770 { 00:13:33.770 "name": "Nvme0n1", 00:13:33.770 "aliases": [ 00:13:33.770 "7677ca54-4b3c-46be-80a6-237f02f7d984" 00:13:33.770 ], 00:13:33.770 "product_name": "NVMe disk", 00:13:33.770 "block_size": 4096, 00:13:33.770 "num_blocks": 38912, 00:13:33.770 "uuid": "7677ca54-4b3c-46be-80a6-237f02f7d984", 00:13:33.770 "assigned_rate_limits": { 00:13:33.770 "rw_ios_per_sec": 0, 00:13:33.770 "rw_mbytes_per_sec": 0, 00:13:33.770 "r_mbytes_per_sec": 0, 00:13:33.770 "w_mbytes_per_sec": 0 00:13:33.770 }, 00:13:33.770 "claimed": false, 00:13:33.770 "zoned": false, 00:13:33.770 "supported_io_types": { 00:13:33.770 "read": true, 00:13:33.770 "write": true, 00:13:33.770 "unmap": true, 00:13:33.770 "flush": true, 00:13:33.770 "reset": true, 00:13:33.770 "nvme_admin": true, 00:13:33.770 "nvme_io": true, 00:13:33.770 "nvme_io_md": false, 00:13:33.770 "write_zeroes": true, 00:13:33.770 "zcopy": false, 00:13:33.770 "get_zone_info": false, 00:13:33.770 "zone_management": false, 00:13:33.770 "zone_append": false, 00:13:33.770 "compare": true, 00:13:33.770 "compare_and_write": true, 00:13:33.770 "abort": true, 00:13:33.770 "seek_hole": false, 00:13:33.770 "seek_data": false, 00:13:33.770 "copy": true, 00:13:33.770 "nvme_iov_md": false 00:13:33.770 }, 00:13:33.770 "memory_domains": [ 00:13:33.770 { 00:13:33.770 "dma_device_id": "system", 00:13:33.770 "dma_device_type": 1 00:13:33.770 } 00:13:33.770 ], 00:13:33.770 "driver_specific": { 00:13:33.770 "nvme": [ 00:13:33.770 { 00:13:33.770 "trid": { 00:13:33.770 "trtype": "TCP", 00:13:33.770 "adrfam": "IPv4", 00:13:33.770 "traddr": "10.0.0.2", 00:13:33.770 "trsvcid": "4420", 00:13:33.770 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:33.770 }, 00:13:33.770 "ctrlr_data": { 00:13:33.770 "cntlid": 1, 00:13:33.770 "vendor_id": "0x8086", 00:13:33.770 "model_number": "SPDK bdev Controller", 00:13:33.770 "serial_number": "SPDK0", 00:13:33.770 "firmware_revision": "24.09", 00:13:33.770 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:33.770 "oacs": { 00:13:33.770 "security": 0, 00:13:33.770 "format": 0, 00:13:33.770 "firmware": 0, 00:13:33.770 "ns_manage": 0 00:13:33.770 }, 00:13:33.770 "multi_ctrlr": true, 00:13:33.770 "ana_reporting": false 00:13:33.770 }, 
00:13:33.770 "vs": { 00:13:33.770 "nvme_version": "1.3" 00:13:33.770 }, 00:13:33.770 "ns_data": { 00:13:33.770 "id": 1, 00:13:33.770 "can_share": true 00:13:33.770 } 00:13:33.770 } 00:13:33.770 ], 00:13:33.770 "mp_policy": "active_passive" 00:13:33.770 } 00:13:33.770 } 00:13:33.770 ] 00:13:33.770 13:52:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3718596 00:13:33.770 13:52:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:33.770 13:52:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:33.770 Running I/O for 10 seconds... 00:13:35.147 Latency(us) 00:13:35.147 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:35.147 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:35.147 Nvme0n1 : 1.00 16606.00 64.87 0.00 0.00 0.00 0.00 0.00 00:13:35.147 =================================================================================================================== 00:13:35.147 Total : 16606.00 64.87 0.00 0.00 0.00 0.00 0.00 00:13:35.147 00:13:35.714 13:52:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5b205b2f-0e53-41f2-b45f-fa9c40b33215 00:13:35.714 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:35.714 Nvme0n1 : 2.00 16854.50 65.84 0.00 0.00 0.00 0.00 0.00 00:13:35.714 =================================================================================================================== 00:13:35.714 Total : 16854.50 65.84 0.00 0.00 0.00 0.00 0.00 00:13:35.714 00:13:35.972 true 00:13:35.972 13:52:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5b205b2f-0e53-41f2-b45f-fa9c40b33215 00:13:35.972 13:52:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:36.229 13:52:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:36.229 13:52:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:36.229 13:52:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3718596 00:13:36.798 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:36.798 Nvme0n1 : 3.00 16962.67 66.26 0.00 0.00 0.00 0.00 0.00 00:13:36.798 =================================================================================================================== 00:13:36.798 Total : 16962.67 66.26 0.00 0.00 0.00 0.00 0.00 00:13:36.798 00:13:37.736 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:37.736 Nvme0n1 : 4.00 17083.25 66.73 0.00 0.00 0.00 0.00 0.00 00:13:37.736 =================================================================================================================== 00:13:37.736 Total : 17083.25 66.73 0.00 0.00 0.00 0.00 0.00 00:13:37.736 00:13:39.116 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:39.116 Nvme0n1 : 5.00 17133.40 66.93 0.00 0.00 0.00 0.00 0.00 00:13:39.116 =================================================================================================================== 00:13:39.116 
Total : 17133.40 66.93 0.00 0.00 0.00 0.00 0.00 00:13:39.116 00:13:40.077 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:40.077 Nvme0n1 : 6.00 17205.33 67.21 0.00 0.00 0.00 0.00 0.00 00:13:40.077 =================================================================================================================== 00:13:40.077 Total : 17205.33 67.21 0.00 0.00 0.00 0.00 0.00 00:13:40.077 00:13:41.011 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:41.011 Nvme0n1 : 7.00 17271.00 67.46 0.00 0.00 0.00 0.00 0.00 00:13:41.011 =================================================================================================================== 00:13:41.011 Total : 17271.00 67.46 0.00 0.00 0.00 0.00 0.00 00:13:41.011 00:13:41.944 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:41.944 Nvme0n1 : 8.00 17306.25 67.60 0.00 0.00 0.00 0.00 0.00 00:13:41.944 =================================================================================================================== 00:13:41.944 Total : 17306.25 67.60 0.00 0.00 0.00 0.00 0.00 00:13:41.944 00:13:42.878 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:42.878 Nvme0n1 : 9.00 17368.11 67.84 0.00 0.00 0.00 0.00 0.00 00:13:42.878 =================================================================================================================== 00:13:42.878 Total : 17368.11 67.84 0.00 0.00 0.00 0.00 0.00 00:13:42.878 00:13:43.813 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:43.813 Nvme0n1 : 10.00 17409.90 68.01 0.00 0.00 0.00 0.00 0.00 00:13:43.813 =================================================================================================================== 00:13:43.813 Total : 17409.90 68.01 0.00 0.00 0.00 0.00 0.00 00:13:43.813 00:13:43.813 00:13:43.813 Latency(us) 00:13:43.813 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:43.813 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:43.813 Nvme0n1 : 10.00 17407.53 68.00 0.00 0.00 7347.86 2305.90 15825.73 00:13:43.813 =================================================================================================================== 00:13:43.813 Total : 17407.53 68.00 0.00 0.00 7347.86 2305.90 15825.73 00:13:43.813 0 00:13:43.813 13:52:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3718464 00:13:43.813 13:52:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 3718464 ']' 00:13:43.813 13:52:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 3718464 00:13:43.813 13:52:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:13:43.813 13:52:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:43.813 13:52:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3718464 00:13:43.813 13:52:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:43.813 13:52:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:43.813 13:52:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3718464' 00:13:43.813 killing process with pid 3718464 00:13:43.813 13:52:38 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 3718464 00:13:43.813 Received shutdown signal, test time was about 10.000000 seconds 00:13:43.813 00:13:43.813 Latency(us) 00:13:43.813 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:43.813 =================================================================================================================== 00:13:43.813 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:43.813 13:52:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 3718464 00:13:44.070 13:52:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:44.643 13:52:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:44.643 13:52:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5b205b2f-0e53-41f2-b45f-fa9c40b33215 00:13:44.643 13:52:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:44.902 13:52:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:44.902 13:52:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:13:44.902 13:52:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:45.161 [2024-07-15 13:52:39.903465] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:45.161 13:52:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5b205b2f-0e53-41f2-b45f-fa9c40b33215 00:13:45.161 13:52:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:13:45.161 13:52:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5b205b2f-0e53-41f2-b45f-fa9c40b33215 00:13:45.161 13:52:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:45.161 13:52:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:45.161 13:52:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:45.161 13:52:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:45.161 13:52:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:45.161 13:52:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:45.161 13:52:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:45.161 13:52:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:45.161 13:52:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5b205b2f-0e53-41f2-b45f-fa9c40b33215 00:13:45.419 request: 00:13:45.420 { 00:13:45.420 "uuid": "5b205b2f-0e53-41f2-b45f-fa9c40b33215", 00:13:45.420 "method": "bdev_lvol_get_lvstores", 00:13:45.420 "req_id": 1 00:13:45.420 } 00:13:45.420 Got JSON-RPC error response 00:13:45.420 response: 00:13:45.420 { 00:13:45.420 "code": -19, 00:13:45.420 "message": "No such device" 00:13:45.420 } 00:13:45.420 13:52:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:13:45.420 13:52:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:45.420 13:52:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:45.420 13:52:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:45.420 13:52:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:45.677 aio_bdev 00:13:45.677 13:52:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 7677ca54-4b3c-46be-80a6-237f02f7d984 00:13:45.677 13:52:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=7677ca54-4b3c-46be-80a6-237f02f7d984 00:13:45.677 13:52:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:45.677 13:52:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:13:45.677 13:52:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:45.677 13:52:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:45.677 13:52:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:45.935 13:52:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7677ca54-4b3c-46be-80a6-237f02f7d984 -t 2000 00:13:46.195 [ 00:13:46.195 { 00:13:46.195 "name": "7677ca54-4b3c-46be-80a6-237f02f7d984", 00:13:46.195 "aliases": [ 00:13:46.195 "lvs/lvol" 00:13:46.195 ], 00:13:46.195 "product_name": "Logical Volume", 00:13:46.195 "block_size": 4096, 00:13:46.195 "num_blocks": 38912, 00:13:46.195 "uuid": "7677ca54-4b3c-46be-80a6-237f02f7d984", 00:13:46.195 "assigned_rate_limits": { 00:13:46.195 "rw_ios_per_sec": 0, 00:13:46.195 "rw_mbytes_per_sec": 0, 00:13:46.195 "r_mbytes_per_sec": 0, 00:13:46.195 "w_mbytes_per_sec": 0 00:13:46.195 }, 00:13:46.195 "claimed": false, 00:13:46.195 "zoned": false, 00:13:46.195 "supported_io_types": { 00:13:46.195 "read": true, 00:13:46.195 "write": true, 00:13:46.195 "unmap": true, 00:13:46.195 "flush": false, 00:13:46.195 "reset": true, 00:13:46.195 "nvme_admin": false, 00:13:46.195 "nvme_io": false, 00:13:46.195 
"nvme_io_md": false, 00:13:46.195 "write_zeroes": true, 00:13:46.195 "zcopy": false, 00:13:46.195 "get_zone_info": false, 00:13:46.195 "zone_management": false, 00:13:46.195 "zone_append": false, 00:13:46.195 "compare": false, 00:13:46.195 "compare_and_write": false, 00:13:46.195 "abort": false, 00:13:46.195 "seek_hole": true, 00:13:46.195 "seek_data": true, 00:13:46.195 "copy": false, 00:13:46.195 "nvme_iov_md": false 00:13:46.195 }, 00:13:46.195 "driver_specific": { 00:13:46.195 "lvol": { 00:13:46.195 "lvol_store_uuid": "5b205b2f-0e53-41f2-b45f-fa9c40b33215", 00:13:46.195 "base_bdev": "aio_bdev", 00:13:46.195 "thin_provision": false, 00:13:46.195 "num_allocated_clusters": 38, 00:13:46.195 "snapshot": false, 00:13:46.195 "clone": false, 00:13:46.195 "esnap_clone": false 00:13:46.195 } 00:13:46.195 } 00:13:46.195 } 00:13:46.195 ] 00:13:46.195 13:52:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:13:46.195 13:52:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5b205b2f-0e53-41f2-b45f-fa9c40b33215 00:13:46.195 13:52:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:46.454 13:52:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:46.454 13:52:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5b205b2f-0e53-41f2-b45f-fa9c40b33215 00:13:46.454 13:52:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:13:46.712 13:52:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:46.712 13:52:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7677ca54-4b3c-46be-80a6-237f02f7d984 00:13:46.971 13:52:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5b205b2f-0e53-41f2-b45f-fa9c40b33215 00:13:47.231 13:52:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:47.489 13:52:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:47.489 00:13:47.489 real 0m17.212s 00:13:47.489 user 0m16.735s 00:13:47.489 sys 0m1.895s 00:13:47.490 13:52:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:47.490 13:52:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:47.490 ************************************ 00:13:47.490 END TEST lvs_grow_clean 00:13:47.490 ************************************ 00:13:47.490 13:52:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:13:47.490 13:52:42 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:13:47.490 13:52:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:47.490 13:52:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:13:47.490 13:52:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:47.490 ************************************ 00:13:47.490 START TEST lvs_grow_dirty 00:13:47.490 ************************************ 00:13:47.490 13:52:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:13:47.490 13:52:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:47.490 13:52:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:47.490 13:52:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:47.490 13:52:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:47.490 13:52:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:47.490 13:52:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:47.490 13:52:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:47.490 13:52:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:47.490 13:52:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:47.749 13:52:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:47.749 13:52:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:48.008 13:52:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=b057c083-94c4-4b05-ad81-2524b4298adf 00:13:48.008 13:52:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b057c083-94c4-4b05-ad81-2524b4298adf 00:13:48.008 13:52:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:48.266 13:52:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:48.266 13:52:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:48.266 13:52:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b057c083-94c4-4b05-ad81-2524b4298adf lvol 150 00:13:48.525 13:52:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=b5e850a5-39ce-4032-bb72-92ab1def67f3 00:13:48.526 13:52:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:48.526 13:52:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:48.784 
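
The prologue of lvs_grow dirty that was just traced builds a small lvstore on a file-backed AIO bdev and then enlarges the file underneath it; a condensed sketch with the same sizes as in the log (the aio file path is the test/nvmf/target/aio_bdev file shown above, and $rpc stands for scripts/rpc.py):

    aio_file="$SPDK/test/nvmf/target/aio_bdev"
    rm -f "$aio_file"
    truncate -s 200M "$aio_file"                          # initial backing size
    $rpc bdev_aio_create "$aio_file" aio_bdev 4096        # AIO bdev with 4 KiB blocks

    # Reserve extra metadata pages so the lvstore can later be grown in place.
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)  # 49 data clusters at 200M
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)      # 150 MiB logical volume

    truncate -s 400M "$aio_file"                          # enlarge the backing file...
    $rpc bdev_aio_rescan aio_bdev                         # ...and let the bdev see it
    # total_data_clusters stays at 49 until bdev_lvol_grow_lvstore runs during I/O.
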
[2024-07-15 13:52:43.474857] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:48.784 [2024-07-15 13:52:43.474948] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:48.784 true 00:13:48.784 13:52:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:48.784 13:52:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b057c083-94c4-4b05-ad81-2524b4298adf 00:13:49.042 13:52:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:49.042 13:52:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:49.302 13:52:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b5e850a5-39ce-4032-bb72-92ab1def67f3 00:13:49.560 13:52:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:49.829 [2024-07-15 13:52:44.497978] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:49.829 13:52:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:50.097 13:52:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3720611 00:13:50.097 13:52:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:50.097 13:52:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:50.097 13:52:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3720611 /var/tmp/bdevperf.sock 00:13:50.097 13:52:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 3720611 ']' 00:13:50.097 13:52:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:50.097 13:52:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:50.097 13:52:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:50.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
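
Exporting that lvol to the initiator side, which the trace continues with here, is a handful of RPCs against the running target plus a discovery listener; a sketch using the same NQN, address and port as this run:

    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0   # allow any host, serial SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"       # back the namespace with the lvol
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420   # advertise via discovery too
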
00:13:50.097 13:52:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:50.097 13:52:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:50.097 [2024-07-15 13:52:44.796862] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:13:50.097 [2024-07-15 13:52:44.796937] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3720611 ] 00:13:50.097 EAL: No free 2048 kB hugepages reported on node 1 00:13:50.097 [2024-07-15 13:52:44.854513] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.360 [2024-07-15 13:52:44.961101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:50.360 13:52:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:50.360 13:52:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:13:50.360 13:52:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:50.621 Nvme0n1 00:13:50.621 13:52:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:50.878 [ 00:13:50.878 { 00:13:50.878 "name": "Nvme0n1", 00:13:50.878 "aliases": [ 00:13:50.878 "b5e850a5-39ce-4032-bb72-92ab1def67f3" 00:13:50.878 ], 00:13:50.878 "product_name": "NVMe disk", 00:13:50.878 "block_size": 4096, 00:13:50.878 "num_blocks": 38912, 00:13:50.878 "uuid": "b5e850a5-39ce-4032-bb72-92ab1def67f3", 00:13:50.878 "assigned_rate_limits": { 00:13:50.878 "rw_ios_per_sec": 0, 00:13:50.878 "rw_mbytes_per_sec": 0, 00:13:50.878 "r_mbytes_per_sec": 0, 00:13:50.878 "w_mbytes_per_sec": 0 00:13:50.878 }, 00:13:50.879 "claimed": false, 00:13:50.879 "zoned": false, 00:13:50.879 "supported_io_types": { 00:13:50.879 "read": true, 00:13:50.879 "write": true, 00:13:50.879 "unmap": true, 00:13:50.879 "flush": true, 00:13:50.879 "reset": true, 00:13:50.879 "nvme_admin": true, 00:13:50.879 "nvme_io": true, 00:13:50.879 "nvme_io_md": false, 00:13:50.879 "write_zeroes": true, 00:13:50.879 "zcopy": false, 00:13:50.879 "get_zone_info": false, 00:13:50.879 "zone_management": false, 00:13:50.879 "zone_append": false, 00:13:50.879 "compare": true, 00:13:50.879 "compare_and_write": true, 00:13:50.879 "abort": true, 00:13:50.879 "seek_hole": false, 00:13:50.879 "seek_data": false, 00:13:50.879 "copy": true, 00:13:50.879 "nvme_iov_md": false 00:13:50.879 }, 00:13:50.879 "memory_domains": [ 00:13:50.879 { 00:13:50.879 "dma_device_id": "system", 00:13:50.879 "dma_device_type": 1 00:13:50.879 } 00:13:50.879 ], 00:13:50.879 "driver_specific": { 00:13:50.879 "nvme": [ 00:13:50.879 { 00:13:50.879 "trid": { 00:13:50.879 "trtype": "TCP", 00:13:50.879 "adrfam": "IPv4", 00:13:50.879 "traddr": "10.0.0.2", 00:13:50.879 "trsvcid": "4420", 00:13:50.879 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:50.879 }, 00:13:50.879 "ctrlr_data": { 00:13:50.879 "cntlid": 1, 00:13:50.879 "vendor_id": "0x8086", 00:13:50.879 "model_number": "SPDK bdev Controller", 00:13:50.879 "serial_number": "SPDK0", 
00:13:50.879 "firmware_revision": "24.09", 00:13:50.879 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:50.879 "oacs": { 00:13:50.879 "security": 0, 00:13:50.879 "format": 0, 00:13:50.879 "firmware": 0, 00:13:50.879 "ns_manage": 0 00:13:50.879 }, 00:13:50.879 "multi_ctrlr": true, 00:13:50.879 "ana_reporting": false 00:13:50.879 }, 00:13:50.879 "vs": { 00:13:50.879 "nvme_version": "1.3" 00:13:50.879 }, 00:13:50.879 "ns_data": { 00:13:50.879 "id": 1, 00:13:50.879 "can_share": true 00:13:50.879 } 00:13:50.879 } 00:13:50.879 ], 00:13:50.879 "mp_policy": "active_passive" 00:13:50.879 } 00:13:50.879 } 00:13:50.879 ] 00:13:50.879 13:52:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3720653 00:13:50.879 13:52:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:50.879 13:52:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:51.136 Running I/O for 10 seconds... 00:13:52.068 Latency(us) 00:13:52.068 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:52.068 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:52.068 Nvme0n1 : 1.00 16854.00 65.84 0.00 0.00 0.00 0.00 0.00 00:13:52.068 =================================================================================================================== 00:13:52.068 Total : 16854.00 65.84 0.00 0.00 0.00 0.00 0.00 00:13:52.068 00:13:53.001 13:52:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b057c083-94c4-4b05-ad81-2524b4298adf 00:13:53.001 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:53.001 Nvme0n1 : 2.00 17016.00 66.47 0.00 0.00 0.00 0.00 0.00 00:13:53.001 =================================================================================================================== 00:13:53.001 Total : 17016.00 66.47 0.00 0.00 0.00 0.00 0.00 00:13:53.001 00:13:53.259 true 00:13:53.259 13:52:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b057c083-94c4-4b05-ad81-2524b4298adf 00:13:53.259 13:52:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:53.516 13:52:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:53.516 13:52:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:53.516 13:52:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3720653 00:13:54.082 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:54.082 Nvme0n1 : 3.00 17107.67 66.83 0.00 0.00 0.00 0.00 0.00 00:13:54.082 =================================================================================================================== 00:13:54.082 Total : 17107.67 66.83 0.00 0.00 0.00 0.00 0.00 00:13:54.082 00:13:55.013 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:55.013 Nvme0n1 : 4.00 17222.50 67.28 0.00 0.00 0.00 0.00 0.00 00:13:55.013 =================================================================================================================== 00:13:55.013 Total : 17222.50 67.28 0.00 
0.00 0.00 0.00 0.00 00:13:55.013 00:13:55.945 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:55.945 Nvme0n1 : 5.00 17289.40 67.54 0.00 0.00 0.00 0.00 0.00 00:13:55.945 =================================================================================================================== 00:13:55.945 Total : 17289.40 67.54 0.00 0.00 0.00 0.00 0.00 00:13:55.945 00:13:57.315 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:57.315 Nvme0n1 : 6.00 17359.83 67.81 0.00 0.00 0.00 0.00 0.00 00:13:57.315 =================================================================================================================== 00:13:57.315 Total : 17359.83 67.81 0.00 0.00 0.00 0.00 0.00 00:13:57.315 00:13:58.272 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:58.272 Nvme0n1 : 7.00 17413.86 68.02 0.00 0.00 0.00 0.00 0.00 00:13:58.272 =================================================================================================================== 00:13:58.272 Total : 17413.86 68.02 0.00 0.00 0.00 0.00 0.00 00:13:58.272 00:13:59.262 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:59.262 Nvme0n1 : 8.00 17417.38 68.04 0.00 0.00 0.00 0.00 0.00 00:13:59.262 =================================================================================================================== 00:13:59.262 Total : 17417.38 68.04 0.00 0.00 0.00 0.00 0.00 00:13:59.262 00:14:00.191 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:00.191 Nvme0n1 : 9.00 17431.11 68.09 0.00 0.00 0.00 0.00 0.00 00:14:00.191 =================================================================================================================== 00:14:00.191 Total : 17431.11 68.09 0.00 0.00 0.00 0.00 0.00 00:14:00.191 00:14:01.122 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:01.122 Nvme0n1 : 10.00 17471.90 68.25 0.00 0.00 0.00 0.00 0.00 00:14:01.122 =================================================================================================================== 00:14:01.122 Total : 17471.90 68.25 0.00 0.00 0.00 0.00 0.00 00:14:01.122 00:14:01.122 00:14:01.122 Latency(us) 00:14:01.122 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:01.122 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:01.122 Nvme0n1 : 10.01 17473.12 68.25 0.00 0.00 7321.08 4344.79 14951.92 00:14:01.122 =================================================================================================================== 00:14:01.122 Total : 17473.12 68.25 0.00 0.00 7321.08 4344.79 14951.92 00:14:01.122 0 00:14:01.122 13:52:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3720611 00:14:01.122 13:52:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 3720611 ']' 00:14:01.122 13:52:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 3720611 00:14:01.122 13:52:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:14:01.122 13:52:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:01.122 13:52:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3720611 00:14:01.122 13:52:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:01.122 13:52:55 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:01.122 13:52:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3720611' 00:14:01.122 killing process with pid 3720611 00:14:01.122 13:52:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 3720611 00:14:01.122 Received shutdown signal, test time was about 10.000000 seconds 00:14:01.122 00:14:01.122 Latency(us) 00:14:01.122 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:01.122 =================================================================================================================== 00:14:01.122 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:01.122 13:52:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 3720611 00:14:01.380 13:52:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:01.637 13:52:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:01.894 13:52:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b057c083-94c4-4b05-ad81-2524b4298adf 00:14:01.894 13:52:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:02.152 13:52:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:02.152 13:52:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:14:02.152 13:52:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3718067 00:14:02.152 13:52:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3718067 00:14:02.152 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3718067 Killed "${NVMF_APP[@]}" "$@" 00:14:02.152 13:52:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:14:02.152 13:52:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:14:02.152 13:52:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:02.152 13:52:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:02.152 13:52:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:02.152 13:52:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=3721981 00:14:02.152 13:52:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 3721981 00:14:02.152 13:52:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:02.152 13:52:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 3721981 ']' 00:14:02.152 13:52:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.152 13:52:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:14:02.152 13:52:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:02.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.152 13:52:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:02.152 13:52:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:02.152 [2024-07-15 13:52:56.913291] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:14:02.152 [2024-07-15 13:52:56.913375] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:02.152 EAL: No free 2048 kB hugepages reported on node 1 00:14:02.152 [2024-07-15 13:52:56.978312] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.410 [2024-07-15 13:52:57.088798] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:02.410 [2024-07-15 13:52:57.088863] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:02.410 [2024-07-15 13:52:57.088876] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:02.410 [2024-07-15 13:52:57.088887] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:02.410 [2024-07-15 13:52:57.088896] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
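
The "dirty" part of the test is what the last few entries show: the nvmf target that owns the lvstore is killed with SIGKILL mid-flight, then a fresh target is started inside the same network namespace and has to recover the blobstore when the AIO bdev is re-created. A sketch of that restart, with the harness helpers (killprocess, nvmfappstart, waitforlisten) reduced to their essentials; the socket-polling loop is a simplification of waitforlisten, not its actual implementation:

    kill -9 "$nvmfpid"                                    # dirty shutdown of the old target

    # Start a new target in the test namespace and wait for its RPC socket.
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    while [[ ! -S /var/tmp/spdk.sock ]]; do sleep 0.5; done

    # Re-creating the AIO bdev replays the blobstore metadata ("Performing recovery
    # on blobstore" below) and brings lvs/lvol back with the grown size intact.
    $rpc bdev_aio_create "$aio_file" aio_bdev 4096
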
00:14:02.410 [2024-07-15 13:52:57.088925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.410 13:52:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:02.410 13:52:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:14:02.410 13:52:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:02.410 13:52:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:02.410 13:52:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:02.410 13:52:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:02.410 13:52:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:02.667 [2024-07-15 13:52:57.461475] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:14:02.667 [2024-07-15 13:52:57.461599] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:14:02.667 [2024-07-15 13:52:57.461645] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:14:02.667 13:52:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:14:02.667 13:52:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev b5e850a5-39ce-4032-bb72-92ab1def67f3 00:14:02.667 13:52:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=b5e850a5-39ce-4032-bb72-92ab1def67f3 00:14:02.667 13:52:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:02.667 13:52:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:14:02.667 13:52:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:02.667 13:52:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:02.667 13:52:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:02.924 13:52:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b5e850a5-39ce-4032-bb72-92ab1def67f3 -t 2000 00:14:03.181 [ 00:14:03.181 { 00:14:03.181 "name": "b5e850a5-39ce-4032-bb72-92ab1def67f3", 00:14:03.181 "aliases": [ 00:14:03.181 "lvs/lvol" 00:14:03.181 ], 00:14:03.181 "product_name": "Logical Volume", 00:14:03.181 "block_size": 4096, 00:14:03.181 "num_blocks": 38912, 00:14:03.181 "uuid": "b5e850a5-39ce-4032-bb72-92ab1def67f3", 00:14:03.181 "assigned_rate_limits": { 00:14:03.181 "rw_ios_per_sec": 0, 00:14:03.181 "rw_mbytes_per_sec": 0, 00:14:03.181 "r_mbytes_per_sec": 0, 00:14:03.181 "w_mbytes_per_sec": 0 00:14:03.181 }, 00:14:03.181 "claimed": false, 00:14:03.181 "zoned": false, 00:14:03.181 "supported_io_types": { 00:14:03.181 "read": true, 00:14:03.181 "write": true, 00:14:03.181 "unmap": true, 00:14:03.181 "flush": false, 00:14:03.181 "reset": true, 00:14:03.181 "nvme_admin": false, 00:14:03.181 "nvme_io": false, 00:14:03.181 "nvme_io_md": 
false, 00:14:03.181 "write_zeroes": true, 00:14:03.181 "zcopy": false, 00:14:03.181 "get_zone_info": false, 00:14:03.181 "zone_management": false, 00:14:03.181 "zone_append": false, 00:14:03.181 "compare": false, 00:14:03.181 "compare_and_write": false, 00:14:03.181 "abort": false, 00:14:03.181 "seek_hole": true, 00:14:03.181 "seek_data": true, 00:14:03.181 "copy": false, 00:14:03.181 "nvme_iov_md": false 00:14:03.181 }, 00:14:03.181 "driver_specific": { 00:14:03.181 "lvol": { 00:14:03.181 "lvol_store_uuid": "b057c083-94c4-4b05-ad81-2524b4298adf", 00:14:03.181 "base_bdev": "aio_bdev", 00:14:03.181 "thin_provision": false, 00:14:03.181 "num_allocated_clusters": 38, 00:14:03.181 "snapshot": false, 00:14:03.181 "clone": false, 00:14:03.181 "esnap_clone": false 00:14:03.181 } 00:14:03.181 } 00:14:03.181 } 00:14:03.181 ] 00:14:03.182 13:52:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:14:03.182 13:52:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b057c083-94c4-4b05-ad81-2524b4298adf 00:14:03.182 13:52:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:14:03.438 13:52:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:14:03.438 13:52:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b057c083-94c4-4b05-ad81-2524b4298adf 00:14:03.438 13:52:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:14:03.695 13:52:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:14:03.695 13:52:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:03.961 [2024-07-15 13:52:58.710752] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:03.961 13:52:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b057c083-94c4-4b05-ad81-2524b4298adf 00:14:03.961 13:52:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:14:03.961 13:52:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b057c083-94c4-4b05-ad81-2524b4298adf 00:14:03.961 13:52:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:03.961 13:52:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:03.961 13:52:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:03.961 13:52:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:03.961 13:52:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
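
Right above, the base AIO bdev is deleted while the lvstore is still registered, and the test asserts that the next bdev_lvol_get_lvstores call fails with the -19 "No such device" response printed just below. The NOT helper from autotest_common.sh simply inverts the exit status; a plain-bash equivalent of that assertion:

    $rpc bdev_aio_delete aio_bdev            # lvstore "lvs" is closed as its bdev disappears
    if $rpc bdev_lvol_get_lvstores -u "$lvs"; then
        echo "bdev_lvol_get_lvstores unexpectedly succeeded" >&2
        exit 1                               # the RPC must return the error shown below
    fi
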
00:14:03.961 13:52:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:03.961 13:52:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:03.961 13:52:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:03.961 13:52:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b057c083-94c4-4b05-ad81-2524b4298adf 00:14:04.218 request: 00:14:04.218 { 00:14:04.218 "uuid": "b057c083-94c4-4b05-ad81-2524b4298adf", 00:14:04.218 "method": "bdev_lvol_get_lvstores", 00:14:04.218 "req_id": 1 00:14:04.218 } 00:14:04.218 Got JSON-RPC error response 00:14:04.218 response: 00:14:04.219 { 00:14:04.219 "code": -19, 00:14:04.219 "message": "No such device" 00:14:04.219 } 00:14:04.219 13:52:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:14:04.219 13:52:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:04.219 13:52:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:04.219 13:52:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:04.219 13:52:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:04.476 aio_bdev 00:14:04.476 13:52:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b5e850a5-39ce-4032-bb72-92ab1def67f3 00:14:04.476 13:52:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=b5e850a5-39ce-4032-bb72-92ab1def67f3 00:14:04.476 13:52:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:04.476 13:52:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:14:04.476 13:52:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:04.476 13:52:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:04.476 13:52:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:04.732 13:52:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b5e850a5-39ce-4032-bb72-92ab1def67f3 -t 2000 00:14:04.989 [ 00:14:04.989 { 00:14:04.989 "name": "b5e850a5-39ce-4032-bb72-92ab1def67f3", 00:14:04.989 "aliases": [ 00:14:04.989 "lvs/lvol" 00:14:04.989 ], 00:14:04.989 "product_name": "Logical Volume", 00:14:04.989 "block_size": 4096, 00:14:04.989 "num_blocks": 38912, 00:14:04.989 "uuid": "b5e850a5-39ce-4032-bb72-92ab1def67f3", 00:14:04.989 "assigned_rate_limits": { 00:14:04.989 "rw_ios_per_sec": 0, 00:14:04.989 "rw_mbytes_per_sec": 0, 00:14:04.989 "r_mbytes_per_sec": 0, 00:14:04.989 "w_mbytes_per_sec": 0 00:14:04.989 }, 00:14:04.989 "claimed": false, 00:14:04.989 "zoned": false, 00:14:04.989 "supported_io_types": { 
00:14:04.989 "read": true, 00:14:04.989 "write": true, 00:14:04.989 "unmap": true, 00:14:04.989 "flush": false, 00:14:04.989 "reset": true, 00:14:04.989 "nvme_admin": false, 00:14:04.989 "nvme_io": false, 00:14:04.989 "nvme_io_md": false, 00:14:04.989 "write_zeroes": true, 00:14:04.989 "zcopy": false, 00:14:04.989 "get_zone_info": false, 00:14:04.989 "zone_management": false, 00:14:04.989 "zone_append": false, 00:14:04.989 "compare": false, 00:14:04.989 "compare_and_write": false, 00:14:04.989 "abort": false, 00:14:04.989 "seek_hole": true, 00:14:04.989 "seek_data": true, 00:14:04.989 "copy": false, 00:14:04.989 "nvme_iov_md": false 00:14:04.989 }, 00:14:04.989 "driver_specific": { 00:14:04.989 "lvol": { 00:14:04.989 "lvol_store_uuid": "b057c083-94c4-4b05-ad81-2524b4298adf", 00:14:04.989 "base_bdev": "aio_bdev", 00:14:04.989 "thin_provision": false, 00:14:04.989 "num_allocated_clusters": 38, 00:14:04.989 "snapshot": false, 00:14:04.989 "clone": false, 00:14:04.989 "esnap_clone": false 00:14:04.989 } 00:14:04.989 } 00:14:04.989 } 00:14:04.989 ] 00:14:05.246 13:52:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:14:05.246 13:52:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b057c083-94c4-4b05-ad81-2524b4298adf 00:14:05.246 13:52:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:05.504 13:53:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:05.504 13:53:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b057c083-94c4-4b05-ad81-2524b4298adf 00:14:05.504 13:53:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:05.504 13:53:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:05.504 13:53:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b5e850a5-39ce-4032-bb72-92ab1def67f3 00:14:05.761 13:53:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b057c083-94c4-4b05-ad81-2524b4298adf 00:14:06.018 13:53:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:06.276 13:53:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:06.276 00:14:06.276 real 0m18.891s 00:14:06.276 user 0m47.454s 00:14:06.276 sys 0m5.019s 00:14:06.276 13:53:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:06.276 13:53:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:06.276 ************************************ 00:14:06.276 END TEST lvs_grow_dirty 00:14:06.276 ************************************ 00:14:06.276 13:53:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:14:06.276 13:53:01 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
00:14:06.276 13:53:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:14:06.276 13:53:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:14:06.276 13:53:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:14:06.276 13:53:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:06.276 13:53:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:14:06.276 13:53:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:14:06.276 13:53:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:14:06.276 13:53:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:06.594 nvmf_trace.0 00:14:06.594 13:53:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:14:06.594 13:53:01 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:14:06.594 13:53:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:06.594 13:53:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:14:06.594 13:53:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:06.594 13:53:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:14:06.594 13:53:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:06.594 13:53:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:06.594 rmmod nvme_tcp 00:14:06.594 rmmod nvme_fabrics 00:14:06.594 rmmod nvme_keyring 00:14:06.594 13:53:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:06.594 13:53:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:14:06.594 13:53:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:14:06.594 13:53:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 3721981 ']' 00:14:06.594 13:53:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 3721981 00:14:06.594 13:53:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 3721981 ']' 00:14:06.594 13:53:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 3721981 00:14:06.594 13:53:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:14:06.594 13:53:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:06.594 13:53:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3721981 00:14:06.594 13:53:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:06.594 13:53:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:06.594 13:53:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3721981' 00:14:06.594 killing process with pid 3721981 00:14:06.594 13:53:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 3721981 00:14:06.594 13:53:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 3721981 00:14:06.853 13:53:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:06.853 13:53:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:06.853 13:53:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:06.853 
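
nvmftestfini, whose effect is spread over the surrounding xtrace output, amounts to saving the target's trace buffer, unloading the kernel NVMe/TCP initiator modules, stopping the target and flushing the test interface. A condensed sketch: $output_dir stands for the job's output directory, and the netns removal is an assumption about what remove_spdk_ns does rather than its literal code:

    # Preserve the trace shared-memory file for offline analysis (spdk_trace -s nvmf -i 0).
    tar -C /dev/shm/ -cvzf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0

    # Tear down the initiator-side kernel modules and the target process.
    sync
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"

    # Drop the test network namespace (assumed remove_spdk_ns behaviour) and
    # flush the leftover address from the second test interface.
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
    ip -4 addr flush cvl_0_1
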
13:53:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:06.853 13:53:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:06.853 13:53:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:06.853 13:53:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:06.853 13:53:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:08.759 13:53:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:08.759 00:14:08.759 real 0m41.537s 00:14:08.759 user 1m9.907s 00:14:08.759 sys 0m8.852s 00:14:08.759 13:53:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:08.759 13:53:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:08.759 ************************************ 00:14:08.759 END TEST nvmf_lvs_grow 00:14:08.759 ************************************ 00:14:08.759 13:53:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:08.759 13:53:03 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:08.759 13:53:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:08.759 13:53:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:08.759 13:53:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:08.759 ************************************ 00:14:08.759 START TEST nvmf_bdev_io_wait 00:14:08.759 ************************************ 00:14:08.759 13:53:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:09.017 * Looking for test storage... 
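A condensed sketch of the teardown that nvmftestfini ran in the nvmf_lvs_grow epilogue above (reconstructed from the trace, not captured output; the PID and interface names are the ones from this run and are illustrative only, and the namespace removal step is assumed to match _remove_spdk_ns):

    nvmf_teardown_sketch() {
        local nvmfpid=$1 initiator_if=$2
        # unload the host-side NVMe/TCP modules (drops nvme_tcp, nvme_fabrics, nvme_keyring)
        modprobe -v -r nvme-tcp
        modprobe -v -r nvme-fabrics
        # stop the nvmf_tgt reactor process and wait for it to exit
        if kill -0 "$nvmfpid" 2>/dev/null; then
            kill "$nvmfpid"
            while kill -0 "$nvmfpid" 2>/dev/null; do sleep 0.1; done
        fi
        # drop the target-side namespace and clear the initiator-side address
        ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
        ip -4 addr flush "$initiator_if"
    }
    # e.g. nvmf_teardown_sketch 3721981 cvl_0_1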
00:14:09.017 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:09.017 13:53:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:09.017 13:53:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:14:09.017 13:53:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:09.017 13:53:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:09.017 13:53:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:09.017 13:53:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:09.017 13:53:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:09.017 13:53:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:09.017 13:53:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:09.017 13:53:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:09.017 13:53:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:09.017 13:53:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:09.017 13:53:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:09.017 13:53:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:09.017 13:53:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:09.017 13:53:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:09.017 13:53:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:09.017 13:53:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:09.017 13:53:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:09.017 13:53:03 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:09.017 13:53:03 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:09.017 13:53:03 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:09.017 13:53:03 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.018 13:53:03 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.018 13:53:03 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.018 13:53:03 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:14:09.018 13:53:03 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.018 13:53:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:14:09.018 13:53:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:09.018 13:53:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:09.018 13:53:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:09.018 13:53:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:09.018 13:53:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:09.018 13:53:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:09.018 13:53:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:09.018 13:53:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:09.018 13:53:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:09.018 13:53:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:09.018 13:53:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:14:09.018 13:53:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:09.018 13:53:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:09.018 13:53:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:09.018 13:53:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:09.018 13:53:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:09.018 13:53:03 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.018 13:53:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:09.018 13:53:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.018 13:53:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:09.018 13:53:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:09.018 13:53:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:14:09.018 13:53:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:10.920 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:10.920 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:14:10.920 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:10.920 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:10.920 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:10.920 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:10.920 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:10.920 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:14:10.920 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:10.920 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:14:10.920 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:14:10.920 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:14:10.920 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:14:10.920 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:14:10.920 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:14:10.920 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:10.920 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:10.920 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:10.920 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:10.920 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:10.920 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:10.920 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:10.920 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:10.920 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:10.920 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:10.920 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:10.920 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:10.920 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:14:10.920 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:10.920 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:10.920 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:10.920 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:10.920 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:10.920 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:10.920 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:10.920 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:10.920 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:10.920 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:10.920 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:10.920 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:10.920 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:10.920 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:10.920 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:10.920 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:10.920 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:10.920 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:10.920 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:10.920 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:10.920 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:10.920 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:10.920 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:10.920 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:10.920 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:10.920 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:10.921 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:10.921 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:10.921 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:10.921 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:10.921 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:10.921 Found net devices under 0000:84:00.0: cvl_0_0 00:14:10.921 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:10.921 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:10.921 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:10.921 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:14:10.921 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:10.921 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:10.921 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:10.921 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:10.921 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:10.921 Found net devices under 0000:84:00.1: cvl_0_1 00:14:10.921 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:10.921 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:10.921 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:14:10.921 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:10.921 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:10.921 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:10.921 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:10.921 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:10.921 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:10.921 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:10.921 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:10.921 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:10.921 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:10.921 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:10.921 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:10.921 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:10.921 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:10.921 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:10.921 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:11.179 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:11.179 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:11.179 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:11.179 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:11.179 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:11.179 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:11.179 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:11.179 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:11.179 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms 00:14:11.179 00:14:11.179 --- 10.0.0.2 ping statistics --- 00:14:11.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:11.179 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:14:11.179 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:11.179 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:11.179 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:14:11.179 00:14:11.179 --- 10.0.0.1 ping statistics --- 00:14:11.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:11.179 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:14:11.179 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:11.179 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:14:11.179 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:11.179 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:11.179 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:11.179 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:11.179 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:11.179 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:11.179 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:11.179 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:14:11.179 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:11.179 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:11.179 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:11.179 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=3724522 00:14:11.179 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:14:11.179 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 3724522 00:14:11.179 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 3724522 ']' 00:14:11.179 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.179 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:11.179 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:11.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:11.179 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:11.179 13:53:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:11.179 [2024-07-15 13:53:05.948383] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
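The namespace plumbing that nvmf_tcp_init replayed just above reduces to the following sequence (sketch, not captured output; device names, addresses and the 4420 port are the ones used in this run):

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                          # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side, host namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                       # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1                   # target -> initiator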
00:14:11.179 [2024-07-15 13:53:05.948476] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:11.179 EAL: No free 2048 kB hugepages reported on node 1 00:14:11.179 [2024-07-15 13:53:06.015603] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:11.438 [2024-07-15 13:53:06.124465] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:11.438 [2024-07-15 13:53:06.124530] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:11.438 [2024-07-15 13:53:06.124552] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:11.438 [2024-07-15 13:53:06.124563] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:11.438 [2024-07-15 13:53:06.124573] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:11.438 [2024-07-15 13:53:06.124659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:11.438 [2024-07-15 13:53:06.124733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:11.438 [2024-07-15 13:53:06.124771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:11.438 [2024-07-15 13:53:06.124779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.379 13:53:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:12.379 13:53:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:14:12.379 13:53:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:12.379 13:53:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:12.379 13:53:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:12.379 13:53:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:12.379 13:53:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:14:12.379 13:53:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.379 13:53:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:12.379 13:53:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.379 13:53:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:14:12.379 13:53:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.379 13:53:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:12.379 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.379 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:12.379 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.379 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:12.379 [2024-07-15 13:53:07.011750] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:12.379 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
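With the target started inside the namespace under --wait-for-rpc, the bring-up that bdev_io_wait.sh drives over /var/tmp/spdk.sock amounts to the RPC sequence below (sketch assembled from the rpc_cmd calls in this trace; run from the SPDK source tree, flags exactly as used by this test):

    RPC=./scripts/rpc.py
    $RPC bdev_set_options -p 5 -c 1                  # bdev-layer options set before init
    $RPC framework_start_init                        # finish subsystem initialization
    $RPC nvmf_create_transport -t tcp -o -u 8192     # TCP transport, options as in this run
    $RPC bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM-backed bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420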
00:14:12.379 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:12.379 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.379 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:12.379 Malloc0 00:14:12.379 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.379 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:12.379 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.379 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:12.379 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.379 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:12.379 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.379 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:12.379 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.379 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:12.379 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.379 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:12.379 [2024-07-15 13:53:07.075440] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:12.379 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.379 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3724674 00:14:12.379 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3724675 00:14:12.379 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:14:12.379 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:14:12.379 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3724677 00:14:12.379 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:12.379 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:12.379 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:14:12.379 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:14:12.379 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:12.379 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:12.380 { 00:14:12.380 "params": { 00:14:12.380 "name": "Nvme$subsystem", 00:14:12.380 "trtype": "$TEST_TRANSPORT", 00:14:12.380 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:12.380 "adrfam": "ipv4", 00:14:12.380 "trsvcid": "$NVMF_PORT", 00:14:12.380 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:12.380 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:12.380 "hdgst": ${hdgst:-false}, 00:14:12.380 "ddgst": ${ddgst:-false} 00:14:12.380 }, 00:14:12.380 "method": "bdev_nvme_attach_controller" 00:14:12.380 } 00:14:12.380 EOF 00:14:12.380 )") 00:14:12.380 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:12.380 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3724680 00:14:12.380 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:12.380 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:14:12.380 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:14:12.380 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:14:12.380 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:12.380 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:12.380 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:12.380 { 00:14:12.380 "params": { 00:14:12.380 "name": "Nvme$subsystem", 00:14:12.380 "trtype": "$TEST_TRANSPORT", 00:14:12.380 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:12.380 "adrfam": "ipv4", 00:14:12.380 "trsvcid": "$NVMF_PORT", 00:14:12.380 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:12.380 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:12.380 "hdgst": ${hdgst:-false}, 00:14:12.380 "ddgst": ${ddgst:-false} 00:14:12.380 }, 00:14:12.380 "method": "bdev_nvme_attach_controller" 00:14:12.380 } 00:14:12.380 EOF 00:14:12.380 )") 00:14:12.380 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:12.380 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:12.380 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:12.380 { 00:14:12.380 "params": { 00:14:12.380 "name": "Nvme$subsystem", 00:14:12.380 "trtype": "$TEST_TRANSPORT", 00:14:12.380 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:12.380 "adrfam": "ipv4", 00:14:12.380 "trsvcid": "$NVMF_PORT", 00:14:12.380 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:12.380 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:12.380 "hdgst": ${hdgst:-false}, 00:14:12.380 "ddgst": ${ddgst:-false} 00:14:12.380 }, 00:14:12.380 "method": "bdev_nvme_attach_controller" 00:14:12.380 } 00:14:12.380 EOF 00:14:12.380 )") 00:14:12.380 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:14:12.380 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:14:12.380 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:12.380 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:12.380 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:12.380 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:12.380 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:12.380 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:12.380 { 00:14:12.380 "params": { 00:14:12.380 "name": "Nvme$subsystem", 00:14:12.380 "trtype": "$TEST_TRANSPORT", 00:14:12.380 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:12.380 "adrfam": "ipv4", 00:14:12.380 "trsvcid": "$NVMF_PORT", 00:14:12.380 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:12.380 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:12.380 "hdgst": ${hdgst:-false}, 00:14:12.380 "ddgst": ${ddgst:-false} 00:14:12.380 }, 00:14:12.380 "method": "bdev_nvme_attach_controller" 00:14:12.380 } 00:14:12.380 EOF 00:14:12.380 )") 00:14:12.380 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:12.380 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3724674 00:14:12.380 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:12.380 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:12.380 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:12.380 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:12.380 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:12.380 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:12.380 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:12.380 "params": { 00:14:12.380 "name": "Nvme1", 00:14:12.380 "trtype": "tcp", 00:14:12.380 "traddr": "10.0.0.2", 00:14:12.380 "adrfam": "ipv4", 00:14:12.380 "trsvcid": "4420", 00:14:12.380 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:12.380 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:12.380 "hdgst": false, 00:14:12.380 "ddgst": false 00:14:12.380 }, 00:14:12.380 "method": "bdev_nvme_attach_controller" 00:14:12.380 }' 00:14:12.380 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:12.380 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:12.380 "params": { 00:14:12.380 "name": "Nvme1", 00:14:12.380 "trtype": "tcp", 00:14:12.380 "traddr": "10.0.0.2", 00:14:12.380 "adrfam": "ipv4", 00:14:12.380 "trsvcid": "4420", 00:14:12.380 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:12.380 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:12.380 "hdgst": false, 00:14:12.380 "ddgst": false 00:14:12.380 }, 00:14:12.380 "method": "bdev_nvme_attach_controller" 00:14:12.380 }' 00:14:12.380 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:12.380 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:12.380 "params": { 00:14:12.380 "name": "Nvme1", 00:14:12.380 "trtype": "tcp", 00:14:12.380 "traddr": "10.0.0.2", 00:14:12.380 "adrfam": "ipv4", 00:14:12.380 "trsvcid": "4420", 00:14:12.380 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:12.380 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:12.380 "hdgst": false, 00:14:12.380 "ddgst": false 00:14:12.380 }, 00:14:12.380 "method": "bdev_nvme_attach_controller" 00:14:12.380 }' 00:14:12.380 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:12.380 13:53:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:12.380 "params": { 00:14:12.380 "name": "Nvme1", 00:14:12.380 "trtype": "tcp", 00:14:12.380 "traddr": "10.0.0.2", 00:14:12.380 "adrfam": "ipv4", 00:14:12.380 "trsvcid": "4420", 00:14:12.380 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:12.380 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:12.380 "hdgst": false, 00:14:12.380 "ddgst": false 00:14:12.380 }, 00:14:12.380 
"method": "bdev_nvme_attach_controller" 00:14:12.380 }' 00:14:12.380 [2024-07-15 13:53:07.123719] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:14:12.380 [2024-07-15 13:53:07.123715] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:14:12.380 [2024-07-15 13:53:07.123714] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:14:12.380 [2024-07-15 13:53:07.123714] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:14:12.380 [2024-07-15 13:53:07.123820] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-15 13:53:07.123820] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-15 13:53:07.123821] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-15 13:53:07.123822] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:14:12.380 --proc-type=auto ] 00:14:12.380 --proc-type=auto ] 00:14:12.380 --proc-type=auto ] 00:14:12.380 EAL: No free 2048 kB hugepages reported on node 1 00:14:12.692 EAL: No free 2048 kB hugepages reported on node 1 00:14:12.692 [2024-07-15 13:53:07.299419] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.692 EAL: No free 2048 kB hugepages reported on node 1 00:14:12.692 [2024-07-15 13:53:07.399767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:14:12.692 [2024-07-15 13:53:07.400182] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.692 EAL: No free 2048 kB hugepages reported on node 1 00:14:12.692 [2024-07-15 13:53:07.503058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:14:12.692 [2024-07-15 13:53:07.509148] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.988 [2024-07-15 13:53:07.610385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:12.988 [2024-07-15 13:53:07.617001] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.988 [2024-07-15 13:53:07.715337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:14:12.988 Running I/O for 1 seconds... 00:14:13.246 Running I/O for 1 seconds... 00:14:13.246 Running I/O for 1 seconds... 00:14:13.246 Running I/O for 1 seconds... 
00:14:14.184 00:14:14.184 Latency(us) 00:14:14.184 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:14.184 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:14:14.184 Nvme1n1 : 1.02 7213.07 28.18 0.00 0.00 17555.16 7330.32 28350.39 00:14:14.184 =================================================================================================================== 00:14:14.184 Total : 7213.07 28.18 0.00 0.00 17555.16 7330.32 28350.39 00:14:14.184 00:14:14.184 Latency(us) 00:14:14.184 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:14.184 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:14:14.184 Nvme1n1 : 1.00 199898.43 780.85 0.00 0.00 637.86 265.48 892.02 00:14:14.184 =================================================================================================================== 00:14:14.184 Total : 199898.43 780.85 0.00 0.00 637.86 265.48 892.02 00:14:14.184 00:14:14.184 Latency(us) 00:14:14.184 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:14.184 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:14:14.184 Nvme1n1 : 1.01 7193.51 28.10 0.00 0.00 17739.28 5364.24 36700.16 00:14:14.184 =================================================================================================================== 00:14:14.184 Total : 7193.51 28.10 0.00 0.00 17739.28 5364.24 36700.16 00:14:14.444 00:14:14.444 Latency(us) 00:14:14.444 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:14.444 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:14:14.444 Nvme1n1 : 1.01 9146.69 35.73 0.00 0.00 13930.69 8543.95 25049.32 00:14:14.444 =================================================================================================================== 00:14:14.444 Total : 9146.69 35.73 0.00 0.00 13930.69 8543.95 25049.32 00:14:14.444 13:53:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3724675 00:14:14.704 13:53:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3724677 00:14:14.704 13:53:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3724680 00:14:14.704 13:53:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:14.704 13:53:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.704 13:53:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:14.704 13:53:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.704 13:53:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:14:14.704 13:53:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:14:14.704 13:53:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:14.704 13:53:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:14:14.704 13:53:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:14.704 13:53:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:14:14.704 13:53:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:14.704 13:53:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:14.704 rmmod nvme_tcp 00:14:14.704 rmmod nvme_fabrics 00:14:14.704 rmmod nvme_keyring 00:14:14.704 13:53:09 nvmf_tcp.nvmf_bdev_io_wait 
-- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:14.704 13:53:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:14:14.704 13:53:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:14:14.704 13:53:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 3724522 ']' 00:14:14.704 13:53:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 3724522 00:14:14.704 13:53:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 3724522 ']' 00:14:14.704 13:53:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 3724522 00:14:14.704 13:53:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:14:14.704 13:53:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:14.704 13:53:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3724522 00:14:14.704 13:53:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:14.704 13:53:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:14.704 13:53:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3724522' 00:14:14.704 killing process with pid 3724522 00:14:14.704 13:53:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 3724522 00:14:14.704 13:53:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 3724522 00:14:14.962 13:53:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:14.962 13:53:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:14.962 13:53:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:14.962 13:53:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:14.962 13:53:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:14.962 13:53:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:14.962 13:53:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:14.962 13:53:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:17.499 13:53:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:17.499 00:14:17.499 real 0m8.142s 00:14:17.499 user 0m20.820s 00:14:17.499 sys 0m3.396s 00:14:17.499 13:53:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:17.499 13:53:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:17.499 ************************************ 00:14:17.499 END TEST nvmf_bdev_io_wait 00:14:17.499 ************************************ 00:14:17.499 13:53:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:17.499 13:53:11 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:17.499 13:53:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:17.499 13:53:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:17.499 13:53:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:17.499 ************************************ 00:14:17.499 START TEST nvmf_queue_depth 00:14:17.499 ************************************ 
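The START TEST / END TEST banners and the real/user/sys summaries in this log come from the run_test wrapper in autotest_common.sh; in rough outline (a reconstruction, not the verbatim helper) each target test is driven like this:

    run_test_sketch() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"        # e.g. test/nvmf/target/queue_depth.sh --transport=tcp
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }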
00:14:17.499 13:53:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:17.499 * Looking for test storage... 00:14:17.499 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:17.499 13:53:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:17.499 13:53:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:14:17.499 13:53:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:17.499 13:53:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:17.499 13:53:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:17.499 13:53:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:17.499 13:53:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:17.499 13:53:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:17.499 13:53:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:17.499 13:53:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:17.499 13:53:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:17.499 13:53:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:17.499 13:53:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:17.499 13:53:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:17.499 13:53:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:17.499 13:53:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:17.499 13:53:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:17.499 13:53:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:17.499 13:53:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:17.499 13:53:11 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:17.499 13:53:11 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:17.499 13:53:11 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:17.499 13:53:11 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.499 13:53:11 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.499 13:53:11 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.499 13:53:11 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:14:17.499 13:53:11 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.499 13:53:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:14:17.499 13:53:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:17.499 13:53:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:17.499 13:53:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:17.499 13:53:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:17.499 13:53:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:17.499 13:53:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:17.499 13:53:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:17.499 13:53:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:17.499 13:53:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:14:17.499 13:53:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:14:17.499 13:53:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:17.499 13:53:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:14:17.499 13:53:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:17.499 13:53:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:17.499 13:53:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:17.499 13:53:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:17.499 13:53:11 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:14:17.499 13:53:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:17.499 13:53:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:17.499 13:53:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:17.499 13:53:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:17.499 13:53:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:17.499 13:53:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:14:17.499 13:53:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:19.402 
13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:19.402 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:19.402 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:19.402 Found net devices under 0000:84:00.0: cvl_0_0 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:19.402 Found net devices under 0000:84:00.1: cvl_0_1 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:19.402 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:19.402 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:14:19.402 00:14:19.402 --- 10.0.0.2 ping statistics --- 00:14:19.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.402 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:19.402 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:19.402 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:14:19.402 00:14:19.402 --- 10.0.0.1 ping statistics --- 00:14:19.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.402 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=3726919 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 3726919 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 3726919 ']' 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:19.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:19.402 13:53:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:19.402 [2024-07-15 13:53:13.942283] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
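For reference, the nvmf_tcp_init sequence traced above amounts to the following shell steps; a minimal sketch using the device names and addresses from this run (cvl_0_0/cvl_0_1, 10.0.0.x/24, TCP port 4420), which would differ on other rigs:

  # move the target-side port into its own namespace and address both ends
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port and sanity-check reachability in both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1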
00:14:19.402 [2024-07-15 13:53:13.942365] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:19.402 EAL: No free 2048 kB hugepages reported on node 1 00:14:19.402 [2024-07-15 13:53:14.006000] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.402 [2024-07-15 13:53:14.107767] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:19.402 [2024-07-15 13:53:14.107841] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:19.402 [2024-07-15 13:53:14.107869] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:19.402 [2024-07-15 13:53:14.107880] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:19.402 [2024-07-15 13:53:14.107889] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:19.402 [2024-07-15 13:53:14.107929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:19.402 13:53:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:19.402 13:53:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:14:19.402 13:53:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:19.402 13:53:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:19.402 13:53:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:19.660 13:53:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:19.660 13:53:14 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:19.660 13:53:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.660 13:53:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:19.660 [2024-07-15 13:53:14.247970] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:19.660 13:53:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.660 13:53:14 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:19.660 13:53:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.660 13:53:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:19.660 Malloc0 00:14:19.660 13:53:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.660 13:53:14 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:19.660 13:53:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.660 13:53:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:19.660 13:53:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.660 13:53:14 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:19.660 13:53:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.660 
13:53:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:19.660 13:53:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.660 13:53:14 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:19.660 13:53:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.660 13:53:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:19.660 [2024-07-15 13:53:14.307423] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:19.660 13:53:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.660 13:53:14 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3727038 00:14:19.660 13:53:14 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:14:19.660 13:53:14 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:19.660 13:53:14 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3727038 /var/tmp/bdevperf.sock 00:14:19.660 13:53:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 3727038 ']' 00:14:19.660 13:53:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:19.660 13:53:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:19.660 13:53:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:19.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:19.660 13:53:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:19.660 13:53:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:19.660 [2024-07-15 13:53:14.351429] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
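The queue_depth test body traced above then configures the target (which was started inside the namespace earlier as ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2) and launches bdevperf as the initiator. The rpc_cmd helper is a thin wrapper around scripts/rpc.py, so the sequence is roughly the following sketch, with paths shortened relative to the SPDK checkout:

  RPC=./scripts/rpc.py   # talks to /var/tmp/spdk.sock by default
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # initiator: queue depth 1024, 4 KiB verify workload, 10 s; -z keeps it idle until told to start
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &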
00:14:19.661 [2024-07-15 13:53:14.351506] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3727038 ] 00:14:19.661 EAL: No free 2048 kB hugepages reported on node 1 00:14:19.661 [2024-07-15 13:53:14.408697] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.918 [2024-07-15 13:53:14.517101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.918 13:53:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:19.918 13:53:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:14:19.918 13:53:14 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:19.918 13:53:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.918 13:53:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:20.176 NVMe0n1 00:14:20.176 13:53:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.176 13:53:14 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:20.176 Running I/O for 10 seconds... 00:14:32.388 00:14:32.388 Latency(us) 00:14:32.388 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:32.388 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:14:32.388 Verification LBA range: start 0x0 length 0x4000 00:14:32.388 NVMe0n1 : 10.07 9848.84 38.47 0.00 0.00 103595.19 20486.07 65244.73 00:14:32.388 =================================================================================================================== 00:14:32.388 Total : 9848.84 38.47 0.00 0.00 103595.19 20486.07 65244.73 00:14:32.388 0 00:14:32.388 13:53:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3727038 00:14:32.388 13:53:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 3727038 ']' 00:14:32.388 13:53:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 3727038 00:14:32.388 13:53:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:14:32.388 13:53:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:32.388 13:53:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3727038 00:14:32.388 13:53:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:32.388 13:53:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:32.388 13:53:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3727038' 00:14:32.388 killing process with pid 3727038 00:14:32.388 13:53:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 3727038 00:14:32.388 Received shutdown signal, test time was about 10.000000 seconds 00:14:32.388 00:14:32.388 Latency(us) 00:14:32.388 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:32.388 
=================================================================================================================== 00:14:32.388 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:32.388 13:53:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 3727038 00:14:32.388 13:53:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:32.388 13:53:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:14:32.388 13:53:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:32.388 13:53:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:14:32.388 13:53:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:32.388 13:53:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:14:32.388 13:53:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:32.388 13:53:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:32.388 rmmod nvme_tcp 00:14:32.388 rmmod nvme_fabrics 00:14:32.388 rmmod nvme_keyring 00:14:32.388 13:53:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:32.388 13:53:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:14:32.388 13:53:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:14:32.388 13:53:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 3726919 ']' 00:14:32.388 13:53:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 3726919 00:14:32.388 13:53:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 3726919 ']' 00:14:32.388 13:53:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 3726919 00:14:32.388 13:53:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:14:32.388 13:53:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:32.388 13:53:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3726919 00:14:32.388 13:53:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:32.388 13:53:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:32.388 13:53:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3726919' 00:14:32.388 killing process with pid 3726919 00:14:32.388 13:53:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 3726919 00:14:32.388 13:53:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 3726919 00:14:32.388 13:53:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:32.388 13:53:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:32.388 13:53:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:32.388 13:53:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:32.388 13:53:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:32.388 13:53:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:32.388 13:53:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:32.388 13:53:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:32.954 13:53:27 nvmf_tcp.nvmf_queue_depth -- 
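The measurement itself is driven over bdevperf's private RPC socket, as traced above: the target subsystem is attached as a local bdev and the pre-configured job is kicked off. A condensed sketch of those two calls:

  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

At a queue depth of 1024 with 4 KiB verify I/O the run sustains ~9,849 IOPS, which is consistent with the reported ~103.6 ms average latency (Little's law: 1024 outstanding I/Os / 9,849 IOPS ≈ 0.104 s per I/O). The second, all-zero latency table appears to be bdevperf's shutdown-time summary after the 10-second window, not a second run.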
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:32.954 00:14:32.954 real 0m15.942s 00:14:32.954 user 0m22.318s 00:14:32.954 sys 0m3.208s 00:14:32.954 13:53:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:32.954 13:53:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:32.954 ************************************ 00:14:32.954 END TEST nvmf_queue_depth 00:14:32.954 ************************************ 00:14:32.954 13:53:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:32.954 13:53:27 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:32.954 13:53:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:32.954 13:53:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:32.954 13:53:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:32.954 ************************************ 00:14:32.954 START TEST nvmf_target_multipath 00:14:32.954 ************************************ 00:14:32.954 13:53:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:33.212 * Looking for test storage... 00:14:33.212 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:33.212 13:53:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:33.212 13:53:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:14:33.212 13:53:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:33.212 13:53:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:33.212 13:53:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:33.212 13:53:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:33.212 13:53:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:33.212 13:53:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:33.212 13:53:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:33.212 13:53:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:33.212 13:53:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:33.212 13:53:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:33.212 13:53:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:33.212 13:53:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:33.212 13:53:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:33.212 13:53:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:33.212 13:53:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:33.212 13:53:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:33.212 13:53:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:33.212 13:53:27 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:33.212 13:53:27 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:33.212 13:53:27 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:33.212 13:53:27 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.212 13:53:27 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.212 13:53:27 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.212 13:53:27 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:14:33.212 13:53:27 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.212 13:53:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:14:33.212 13:53:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:33.212 13:53:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:33.212 13:53:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:33.212 13:53:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:33.212 13:53:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:14:33.212 13:53:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:33.212 13:53:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:33.212 13:53:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:33.212 13:53:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:33.212 13:53:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:33.212 13:53:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:14:33.212 13:53:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:33.212 13:53:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:14:33.212 13:53:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:33.212 13:53:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:33.212 13:53:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:33.212 13:53:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:33.212 13:53:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:33.212 13:53:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:33.212 13:53:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:33.212 13:53:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:33.212 13:53:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:33.212 13:53:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:33.212 13:53:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:14:33.212 13:53:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:35.117 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:35.117 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:14:35.117 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:35.117 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:35.117 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@298 -- # local -ga mlx 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:35.118 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:35.118 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:35.118 Found net devices under 0000:84:00.0: cvl_0_0 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:35.118 Found net devices under 0000:84:00.1: cvl_0_1 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:35.118 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:35.377 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:35.377 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:35.377 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:35.377 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:35.377 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:14:35.377 00:14:35.377 --- 10.0.0.2 ping statistics --- 00:14:35.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.377 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:14:35.377 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:35.377 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:35.377 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:14:35.377 00:14:35.377 --- 10.0.0.1 ping statistics --- 00:14:35.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.377 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:14:35.377 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:35.377 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:14:35.377 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:35.377 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:35.377 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:35.377 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:35.377 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:35.377 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:35.377 13:53:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:35.377 13:53:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:14:35.377 13:53:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:14:35.377 only one NIC for nvmf test 00:14:35.377 13:53:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:14:35.377 13:53:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:35.377 13:53:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:14:35.377 13:53:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:35.377 13:53:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:14:35.377 13:53:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:35.377 13:53:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:35.377 rmmod nvme_tcp 00:14:35.377 rmmod nvme_fabrics 00:14:35.377 rmmod nvme_keyring 00:14:35.377 13:53:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:35.377 13:53:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:14:35.377 13:53:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:14:35.377 13:53:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:14:35.377 13:53:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:35.377 13:53:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:35.377 13:53:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:35.377 13:53:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:35.377 13:53:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:35.377 13:53:30 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:35.377 13:53:30 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:35.377 13:53:30 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:37.281 13:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
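multipath.sh bails out at this point because the rig exposes only one usable initiator/target interface pair, and the multipath test needs a second path. The guard traced at multipath.sh@45-48 is essentially the following sketch (the variable name is an assumption; the trace only shows the tested value expanding to an empty string):

  # assumed variable name for the second path's address
  if [ -z "$NVMF_SECOND_TARGET_IP" ]; then
      echo 'only one NIC for nvmf test'
      nvmftestfini
      exit 0
  fi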
cvl_0_1 00:14:37.281 13:53:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:14:37.281 13:53:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:14:37.281 13:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:37.281 13:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:14:37.539 13:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:37.539 13:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:14:37.539 13:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:37.539 13:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:37.539 13:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:37.539 13:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:14:37.539 13:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:14:37.539 13:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:14:37.539 13:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:37.539 13:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:37.539 13:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:37.539 13:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:37.539 13:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:37.539 13:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:37.539 13:53:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:37.539 13:53:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:37.539 13:53:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:37.539 00:14:37.539 real 0m4.373s 00:14:37.539 user 0m0.800s 00:14:37.539 sys 0m1.547s 00:14:37.539 13:53:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:37.539 13:53:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:37.539 ************************************ 00:14:37.539 END TEST nvmf_target_multipath 00:14:37.539 ************************************ 00:14:37.539 13:53:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:37.539 13:53:32 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:37.539 13:53:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:37.539 13:53:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:37.539 13:53:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:37.539 ************************************ 00:14:37.539 START TEST nvmf_zcopy 00:14:37.539 ************************************ 00:14:37.539 13:53:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:37.539 * Looking for test storage... 
00:14:37.539 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:37.539 13:53:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:37.539 13:53:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:14:37.539 13:53:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:37.539 13:53:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:37.539 13:53:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:37.539 13:53:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:37.539 13:53:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:37.539 13:53:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:37.539 13:53:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:37.539 13:53:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:37.539 13:53:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:37.539 13:53:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:37.539 13:53:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:37.539 13:53:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:37.539 13:53:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:37.539 13:53:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:37.539 13:53:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:37.539 13:53:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:37.539 13:53:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:37.539 13:53:32 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:37.539 13:53:32 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:37.539 13:53:32 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:37.539 13:53:32 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.539 13:53:32 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:14:37.539 13:53:32 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.539 13:53:32 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:14:37.539 13:53:32 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.539 13:53:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:14:37.539 13:53:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:37.539 13:53:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:37.539 13:53:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:37.539 13:53:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:37.539 13:53:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:37.539 13:53:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:37.539 13:53:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:37.539 13:53:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:37.539 13:53:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:14:37.539 13:53:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:37.539 13:53:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:37.539 13:53:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:37.539 13:53:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:37.539 13:53:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:37.539 13:53:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:37.539 13:53:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:37.539 13:53:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:37.539 13:53:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:37.539 13:53:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:37.539 13:53:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:14:37.539 13:53:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:39.450 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:39.450 
13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:39.450 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:39.450 Found net devices under 0000:84:00.0: cvl_0_0 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:39.450 Found net devices under 0000:84:00.1: cvl_0_1 00:14:39.450 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:39.451 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:39.451 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:14:39.451 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:39.451 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:39.451 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:39.451 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:39.451 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:39.451 13:53:34 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:39.451 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:39.451 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:39.451 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:39.451 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:39.451 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:39.451 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:39.451 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:39.451 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:39.451 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:39.451 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:39.451 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:39.451 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:39.451 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:39.451 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:39.714 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:39.714 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:39.714 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:39.714 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:39.714 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:14:39.714 00:14:39.714 --- 10.0.0.2 ping statistics --- 00:14:39.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:39.714 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:14:39.714 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:39.714 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:39.714 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:14:39.714 00:14:39.714 --- 10.0.0.1 ping statistics --- 00:14:39.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:39.714 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:14:39.714 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:39.714 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:14:39.714 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:39.714 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:39.714 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:39.714 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:39.714 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:39.714 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:39.714 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:39.714 13:53:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:14:39.714 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:39.714 13:53:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:39.714 13:53:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:39.714 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=3732144 00:14:39.714 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:39.714 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 3732144 00:14:39.714 13:53:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 3732144 ']' 00:14:39.714 13:53:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:39.714 13:53:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:39.714 13:53:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:39.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:39.714 13:53:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:39.714 13:53:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:39.714 [2024-07-15 13:53:34.388246] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:14:39.714 [2024-07-15 13:53:34.388323] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:39.714 EAL: No free 2048 kB hugepages reported on node 1 00:14:39.714 [2024-07-15 13:53:34.450798] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.971 [2024-07-15 13:53:34.559436] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:39.971 [2024-07-15 13:53:34.559486] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:39.971 [2024-07-15 13:53:34.559515] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:39.971 [2024-07-15 13:53:34.559526] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:39.971 [2024-07-15 13:53:34.559536] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:39.971 [2024-07-15 13:53:34.559568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:39.971 13:53:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:39.971 13:53:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:14:39.971 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:39.971 13:53:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:39.971 13:53:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:39.971 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:39.971 13:53:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:14:39.971 13:53:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:14:39.971 13:53:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.971 13:53:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:39.971 [2024-07-15 13:53:34.703435] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:39.971 13:53:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.971 13:53:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:39.971 13:53:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.971 13:53:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:39.971 13:53:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.971 13:53:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:39.971 13:53:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.971 13:53:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:39.971 [2024-07-15 13:53:34.719612] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:39.971 13:53:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.971 13:53:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:39.971 13:53:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.971 13:53:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:39.971 13:53:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.971 13:53:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:14:39.971 13:53:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.971 13:53:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:39.971 malloc0 00:14:39.971 13:53:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.971 
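The rpc_cmd calls traced above assemble the zero-copy target: a TCP transport created with --zcopy, subsystem nqn.2016-06.io.spdk:cnode1, a data and a discovery listener on 10.0.0.2:4420, and a 32 MB malloc bdev with 4 KiB blocks. Assuming rpc_cmd forwards to scripts/rpc.py over the default /var/tmp/spdk.sock socket (the usual autotest wiring; not shown explicitly in this trace), the same setup can be sketched as:

  scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  # the next traced step attaches malloc0 to the subsystem as NSID 1:
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1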
13:53:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:39.971 13:53:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.971 13:53:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:39.971 13:53:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.971 13:53:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:14:39.971 13:53:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:14:39.971 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:14:39.971 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:14:39.971 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:39.971 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:39.971 { 00:14:39.971 "params": { 00:14:39.971 "name": "Nvme$subsystem", 00:14:39.971 "trtype": "$TEST_TRANSPORT", 00:14:39.971 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:39.971 "adrfam": "ipv4", 00:14:39.971 "trsvcid": "$NVMF_PORT", 00:14:39.971 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:39.971 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:39.971 "hdgst": ${hdgst:-false}, 00:14:39.971 "ddgst": ${ddgst:-false} 00:14:39.971 }, 00:14:39.971 "method": "bdev_nvme_attach_controller" 00:14:39.971 } 00:14:39.971 EOF 00:14:39.971 )") 00:14:39.971 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:14:39.971 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:14:39.971 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:14:39.971 13:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:39.971 "params": { 00:14:39.971 "name": "Nvme1", 00:14:39.971 "trtype": "tcp", 00:14:39.971 "traddr": "10.0.0.2", 00:14:39.971 "adrfam": "ipv4", 00:14:39.971 "trsvcid": "4420", 00:14:39.971 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:39.971 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:39.971 "hdgst": false, 00:14:39.971 "ddgst": false 00:14:39.971 }, 00:14:39.971 "method": "bdev_nvme_attach_controller" 00:14:39.971 }' 00:14:39.971 [2024-07-15 13:53:34.800838] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:14:39.972 [2024-07-15 13:53:34.800909] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3732172 ] 00:14:40.229 EAL: No free 2048 kB hugepages reported on node 1 00:14:40.229 [2024-07-15 13:53:34.863352] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.229 [2024-07-15 13:53:34.973842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.488 Running I/O for 10 seconds... 
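bdevperf reads its configuration from the file passed via --json (here a /dev/fd pipe); the bdev_nvme_attach_controller entry that gen_nvmf_target_json emits is printed in the trace above and, reformatted for readability only, looks like:

  {
    "params": {
      "name": "Nvme1",
      "trtype": "tcp",
      "traddr": "10.0.0.2",
      "adrfam": "ipv4",
      "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode1",
      "hostnqn": "nqn.2016-06.io.spdk:host1",
      "hdgst": false,
      "ddgst": false
    },
    "method": "bdev_nvme_attach_controller"
  }

Only this fragment appears in the trace; the full document handed to bdevperf presumably wraps it in the surrounding config structure, so treat the snippet as the attach-controller parameters rather than a complete config file.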
00:14:50.482 00:14:50.482 Latency(us) 00:14:50.482 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.482 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:14:50.482 Verification LBA range: start 0x0 length 0x1000 00:14:50.482 Nvme1n1 : 10.01 6461.59 50.48 0.00 0.00 19758.34 2852.03 30486.38 00:14:50.482 =================================================================================================================== 00:14:50.482 Total : 6461.59 50.48 0.00 0.00 19758.34 2852.03 30486.38 00:14:50.740 13:53:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3733483 00:14:50.740 13:53:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:14:50.740 13:53:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:50.740 13:53:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:14:50.740 13:53:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:14:50.740 13:53:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:14:50.740 13:53:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:14:50.740 13:53:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:50.740 13:53:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:50.740 { 00:14:50.740 "params": { 00:14:50.740 "name": "Nvme$subsystem", 00:14:50.740 "trtype": "$TEST_TRANSPORT", 00:14:50.740 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:50.740 "adrfam": "ipv4", 00:14:50.740 "trsvcid": "$NVMF_PORT", 00:14:50.740 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:50.740 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:50.740 "hdgst": ${hdgst:-false}, 00:14:50.740 "ddgst": ${ddgst:-false} 00:14:50.740 }, 00:14:50.740 "method": "bdev_nvme_attach_controller" 00:14:50.740 } 00:14:50.740 EOF 00:14:50.740 )") 00:14:50.740 13:53:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:14:50.740 [2024-07-15 13:53:45.514896] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.740 [2024-07-15 13:53:45.514944] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.740 13:53:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:14:50.740 13:53:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:14:50.740 13:53:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:50.740 "params": { 00:14:50.740 "name": "Nvme1", 00:14:50.740 "trtype": "tcp", 00:14:50.740 "traddr": "10.0.0.2", 00:14:50.740 "adrfam": "ipv4", 00:14:50.740 "trsvcid": "4420", 00:14:50.740 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:50.740 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:50.740 "hdgst": false, 00:14:50.740 "ddgst": false 00:14:50.740 }, 00:14:50.740 "method": "bdev_nvme_attach_controller" 00:14:50.740 }' 00:14:50.740 [2024-07-15 13:53:45.522839] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.740 [2024-07-15 13:53:45.522862] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.740 [2024-07-15 13:53:45.530858] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.740 [2024-07-15 13:53:45.530880] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.740 [2024-07-15 13:53:45.538878] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.740 [2024-07-15 13:53:45.538900] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.740 [2024-07-15 13:53:45.546901] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.740 [2024-07-15 13:53:45.546922] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.740 [2024-07-15 13:53:45.554567] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:14:50.740 [2024-07-15 13:53:45.554638] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3733483 ] 00:14:50.740 [2024-07-15 13:53:45.554925] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.740 [2024-07-15 13:53:45.554946] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.740 [2024-07-15 13:53:45.562959] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.740 [2024-07-15 13:53:45.562981] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.740 [2024-07-15 13:53:45.570967] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.740 [2024-07-15 13:53:45.570988] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.740 [2024-07-15 13:53:45.578990] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.740 [2024-07-15 13:53:45.579022] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.999 EAL: No free 2048 kB hugepages reported on node 1 00:14:51.000 [2024-07-15 13:53:45.587038] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.000 [2024-07-15 13:53:45.587059] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.000 [2024-07-15 13:53:45.595061] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.000 [2024-07-15 13:53:45.595097] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.000 [2024-07-15 13:53:45.603097] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.000 [2024-07-15 13:53:45.603117] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.000 [2024-07-15 13:53:45.611114] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.000 [2024-07-15 13:53:45.611134] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.000 [2024-07-15 13:53:45.616175] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.000 [2024-07-15 13:53:45.619141] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.000 [2024-07-15 13:53:45.619162] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.000 [2024-07-15 13:53:45.627202] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.000 [2024-07-15 13:53:45.627241] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.000 [2024-07-15 13:53:45.635182] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.000 [2024-07-15 13:53:45.635206] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.000 [2024-07-15 13:53:45.643198] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.000 [2024-07-15 13:53:45.643218] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.000 [2024-07-15 13:53:45.651217] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.000 [2024-07-15 13:53:45.651236] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.000 [2024-07-15 13:53:45.659238] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.000 [2024-07-15 13:53:45.659258] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.000 [2024-07-15 13:53:45.667259] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.000 [2024-07-15 13:53:45.667279] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.000 [2024-07-15 13:53:45.675287] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.000 [2024-07-15 13:53:45.675317] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.000 [2024-07-15 13:53:45.683345] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.000 [2024-07-15 13:53:45.683381] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.000 [2024-07-15 13:53:45.691354] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.000 [2024-07-15 13:53:45.691386] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.000 [2024-07-15 13:53:45.699351] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.000 [2024-07-15 13:53:45.699371] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.000 [2024-07-15 13:53:45.707370] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.000 [2024-07-15 13:53:45.707389] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.000 [2024-07-15 13:53:45.715392] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:14:51.000 [2024-07-15 13:53:45.715413] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.000 [2024-07-15 13:53:45.723414] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.000 [2024-07-15 13:53:45.723434] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.000 [2024-07-15 13:53:45.731435] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.000 [2024-07-15 13:53:45.731454] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.000 [2024-07-15 13:53:45.734777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.000 [2024-07-15 13:53:45.739457] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.000 [2024-07-15 13:53:45.739477] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.000 [2024-07-15 13:53:45.747485] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.000 [2024-07-15 13:53:45.747505] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.000 [2024-07-15 13:53:45.755541] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.000 [2024-07-15 13:53:45.755579] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.000 [2024-07-15 13:53:45.763568] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.000 [2024-07-15 13:53:45.763605] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.000 [2024-07-15 13:53:45.771596] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.000 [2024-07-15 13:53:45.771636] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.000 [2024-07-15 13:53:45.779614] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.000 [2024-07-15 13:53:45.779655] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.000 [2024-07-15 13:53:45.787633] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.000 [2024-07-15 13:53:45.787672] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.000 [2024-07-15 13:53:45.795655] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.000 [2024-07-15 13:53:45.795694] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.000 [2024-07-15 13:53:45.803646] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.000 [2024-07-15 13:53:45.803671] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.000 [2024-07-15 13:53:45.811696] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.000 [2024-07-15 13:53:45.811753] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.000 [2024-07-15 13:53:45.819743] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.000 [2024-07-15 13:53:45.819797] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.000 [2024-07-15 13:53:45.827763] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:14:51.000 [2024-07-15 13:53:45.827803] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.000 [2024-07-15 13:53:45.835746] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.000 [2024-07-15 13:53:45.835768] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.258 [2024-07-15 13:53:45.843768] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.258 [2024-07-15 13:53:45.843806] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.258 [2024-07-15 13:53:45.851801] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.258 [2024-07-15 13:53:45.851822] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.258 [2024-07-15 13:53:45.859834] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.258 [2024-07-15 13:53:45.859860] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.258 [2024-07-15 13:53:45.867851] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.258 [2024-07-15 13:53:45.867876] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.258 [2024-07-15 13:53:45.875874] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.258 [2024-07-15 13:53:45.875898] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.258 [2024-07-15 13:53:45.883892] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.258 [2024-07-15 13:53:45.883916] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.258 [2024-07-15 13:53:45.891912] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.258 [2024-07-15 13:53:45.891935] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.258 [2024-07-15 13:53:45.899933] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.259 [2024-07-15 13:53:45.899958] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.259 [2024-07-15 13:53:45.907953] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.259 [2024-07-15 13:53:45.907975] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.259 [2024-07-15 13:53:45.915981] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.259 [2024-07-15 13:53:45.916006] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.259 [2024-07-15 13:53:45.924002] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.259 [2024-07-15 13:53:45.924041] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.259 Running I/O for 5 seconds... 
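From here the trace interleaves the 5-second randrw bdevperf run with a long series of paired target messages: subsystem.c reporting 'Requested NSID 1 already in use' and nvmf_rpc.c reporting 'Unable to add namespace'. That pair is what the target emits when an nvmf_subsystem_add_ns request names an NSID that is still attached, so the loop below appears to be repeatedly re-issuing the namespace-add RPC for NSID 1 while I/O is in flight. As an illustrative sketch only (not taken from zcopy.sh), the same error pair can be provoked on a target configured as above by adding the namespace twice:

  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # succeeds while NSID 1 is free
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # rejected: Requested NSID 1 already in use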
00:14:51.259 [2024-07-15 13:53:45.932035] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.259 [2024-07-15 13:53:45.932058] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.259 [2024-07-15 13:53:45.946037] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.259 [2024-07-15 13:53:45.946063] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.259 [2024-07-15 13:53:45.956788] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.259 [2024-07-15 13:53:45.956815] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.259 [2024-07-15 13:53:45.968845] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.259 [2024-07-15 13:53:45.968872] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.259 [2024-07-15 13:53:45.978519] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.259 [2024-07-15 13:53:45.978544] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.259 [2024-07-15 13:53:45.988902] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.259 [2024-07-15 13:53:45.988935] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.259 [2024-07-15 13:53:45.999137] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.259 [2024-07-15 13:53:45.999162] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.259 [2024-07-15 13:53:46.009264] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.259 [2024-07-15 13:53:46.009289] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.259 [2024-07-15 13:53:46.020009] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.259 [2024-07-15 13:53:46.020048] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.259 [2024-07-15 13:53:46.030388] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.259 [2024-07-15 13:53:46.030412] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.259 [2024-07-15 13:53:46.040651] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.259 [2024-07-15 13:53:46.040676] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.259 [2024-07-15 13:53:46.051287] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.259 [2024-07-15 13:53:46.051312] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.259 [2024-07-15 13:53:46.063338] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.259 [2024-07-15 13:53:46.063363] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.259 [2024-07-15 13:53:46.072431] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.259 [2024-07-15 13:53:46.072455] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.259 [2024-07-15 13:53:46.082908] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.259 
[2024-07-15 13:53:46.082935] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.259 [2024-07-15 13:53:46.093276] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.259 [2024-07-15 13:53:46.093301] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.543 [2024-07-15 13:53:46.106265] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.543 [2024-07-15 13:53:46.106294] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.543 [2024-07-15 13:53:46.117469] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.543 [2024-07-15 13:53:46.117496] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.543 [2024-07-15 13:53:46.128327] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.543 [2024-07-15 13:53:46.128351] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.543 [2024-07-15 13:53:46.140933] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.543 [2024-07-15 13:53:46.140961] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.543 [2024-07-15 13:53:46.150966] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.543 [2024-07-15 13:53:46.150994] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.543 [2024-07-15 13:53:46.160803] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.543 [2024-07-15 13:53:46.160831] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.543 [2024-07-15 13:53:46.171676] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.543 [2024-07-15 13:53:46.171700] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.543 [2024-07-15 13:53:46.183731] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.543 [2024-07-15 13:53:46.183768] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.543 [2024-07-15 13:53:46.193222] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.543 [2024-07-15 13:53:46.193254] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.543 [2024-07-15 13:53:46.205270] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.543 [2024-07-15 13:53:46.205294] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.543 [2024-07-15 13:53:46.217006] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.543 [2024-07-15 13:53:46.217047] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.543 [2024-07-15 13:53:46.226187] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.543 [2024-07-15 13:53:46.226211] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.543 [2024-07-15 13:53:46.236516] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.543 [2024-07-15 13:53:46.236539] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.543 [2024-07-15 13:53:46.246758] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.543 [2024-07-15 13:53:46.246784] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.543 [2024-07-15 13:53:46.257387] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.543 [2024-07-15 13:53:46.257411] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.543 [2024-07-15 13:53:46.267474] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.543 [2024-07-15 13:53:46.267498] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.543 [2024-07-15 13:53:46.277811] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.543 [2024-07-15 13:53:46.277838] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.543 [2024-07-15 13:53:46.288042] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.543 [2024-07-15 13:53:46.288066] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.543 [2024-07-15 13:53:46.298769] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.543 [2024-07-15 13:53:46.298809] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.543 [2024-07-15 13:53:46.309333] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.543 [2024-07-15 13:53:46.309356] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.543 [2024-07-15 13:53:46.319346] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.543 [2024-07-15 13:53:46.319370] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.543 [2024-07-15 13:53:46.329781] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.543 [2024-07-15 13:53:46.329807] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.543 [2024-07-15 13:53:46.342398] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.543 [2024-07-15 13:53:46.342423] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.543 [2024-07-15 13:53:46.352448] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.543 [2024-07-15 13:53:46.352473] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.543 [2024-07-15 13:53:46.363276] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.543 [2024-07-15 13:53:46.363301] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.803 [2024-07-15 13:53:46.374106] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.803 [2024-07-15 13:53:46.374132] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.803 [2024-07-15 13:53:46.386478] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.803 [2024-07-15 13:53:46.386503] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.803 [2024-07-15 13:53:46.396911] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.803 [2024-07-15 13:53:46.396945] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.803 [2024-07-15 13:53:46.407378] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.803 [2024-07-15 13:53:46.407401] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.803 [2024-07-15 13:53:46.417840] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.803 [2024-07-15 13:53:46.417865] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.803 [2024-07-15 13:53:46.430014] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.803 [2024-07-15 13:53:46.430053] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.803 [2024-07-15 13:53:46.439432] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.803 [2024-07-15 13:53:46.439456] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.803 [2024-07-15 13:53:46.449802] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.803 [2024-07-15 13:53:46.449828] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.803 [2024-07-15 13:53:46.461735] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.803 [2024-07-15 13:53:46.461768] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.803 [2024-07-15 13:53:46.470634] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.803 [2024-07-15 13:53:46.470658] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.803 [2024-07-15 13:53:46.481457] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.803 [2024-07-15 13:53:46.481481] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.803 [2024-07-15 13:53:46.491907] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.803 [2024-07-15 13:53:46.491932] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.803 [2024-07-15 13:53:46.502255] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.803 [2024-07-15 13:53:46.502280] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.803 [2024-07-15 13:53:46.512449] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.803 [2024-07-15 13:53:46.512472] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.803 [2024-07-15 13:53:46.523242] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.803 [2024-07-15 13:53:46.523266] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.803 [2024-07-15 13:53:46.533487] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.803 [2024-07-15 13:53:46.533510] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.803 [2024-07-15 13:53:46.544045] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.803 [2024-07-15 13:53:46.544070] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.803 [2024-07-15 13:53:46.556833] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.803 [2024-07-15 13:53:46.556858] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.803 [2024-07-15 13:53:46.568032] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.803 [2024-07-15 13:53:46.568057] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.803 [2024-07-15 13:53:46.576797] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.803 [2024-07-15 13:53:46.576824] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.803 [2024-07-15 13:53:46.587702] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.803 [2024-07-15 13:53:46.587749] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.803 [2024-07-15 13:53:46.600204] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.803 [2024-07-15 13:53:46.600228] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.803 [2024-07-15 13:53:46.609699] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.803 [2024-07-15 13:53:46.609748] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.803 [2024-07-15 13:53:46.619642] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.803 [2024-07-15 13:53:46.619665] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.803 [2024-07-15 13:53:46.629612] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.803 [2024-07-15 13:53:46.629636] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.803 [2024-07-15 13:53:46.640344] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.803 [2024-07-15 13:53:46.640368] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.063 [2024-07-15 13:53:46.652842] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.063 [2024-07-15 13:53:46.652869] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.063 [2024-07-15 13:53:46.662525] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.063 [2024-07-15 13:53:46.662549] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.063 [2024-07-15 13:53:46.672656] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.063 [2024-07-15 13:53:46.672680] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.063 [2024-07-15 13:53:46.682644] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.063 [2024-07-15 13:53:46.682668] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.063 [2024-07-15 13:53:46.693230] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.063 [2024-07-15 13:53:46.693255] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.063 [2024-07-15 13:53:46.706284] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.063 [2024-07-15 13:53:46.706308] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.063 [2024-07-15 13:53:46.716222] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.063 [2024-07-15 13:53:46.716246] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.063 [2024-07-15 13:53:46.726437] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.063 [2024-07-15 13:53:46.726462] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.063 [2024-07-15 13:53:46.736325] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.063 [2024-07-15 13:53:46.736349] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.063 [2024-07-15 13:53:46.746186] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.063 [2024-07-15 13:53:46.746210] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.063 [2024-07-15 13:53:46.756503] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.063 [2024-07-15 13:53:46.756528] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.063 [2024-07-15 13:53:46.767173] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.063 [2024-07-15 13:53:46.767197] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.063 [2024-07-15 13:53:46.777828] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.063 [2024-07-15 13:53:46.777854] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.063 [2024-07-15 13:53:46.789700] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.063 [2024-07-15 13:53:46.789745] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.064 [2024-07-15 13:53:46.799359] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.064 [2024-07-15 13:53:46.799383] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.064 [2024-07-15 13:53:46.810324] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.064 [2024-07-15 13:53:46.810348] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.064 [2024-07-15 13:53:46.820755] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.064 [2024-07-15 13:53:46.820779] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.064 [2024-07-15 13:53:46.831134] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.064 [2024-07-15 13:53:46.831159] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.064 [2024-07-15 13:53:46.841907] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.064 [2024-07-15 13:53:46.841934] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.064 [2024-07-15 13:53:46.852465] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.064 [2024-07-15 13:53:46.852489] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.064 [2024-07-15 13:53:46.862781] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.064 [2024-07-15 13:53:46.862807] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.064 [2024-07-15 13:53:46.872984] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.064 [2024-07-15 13:53:46.873026] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.064 [2024-07-15 13:53:46.882901] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.064 [2024-07-15 13:53:46.882927] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.064 [2024-07-15 13:53:46.893266] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.064 [2024-07-15 13:53:46.893290] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.322 [2024-07-15 13:53:46.903794] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.322 [2024-07-15 13:53:46.903822] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.322 [2024-07-15 13:53:46.914343] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.322 [2024-07-15 13:53:46.914368] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.322 [2024-07-15 13:53:46.926777] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.322 [2024-07-15 13:53:46.926817] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.322 [2024-07-15 13:53:46.936070] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.322 [2024-07-15 13:53:46.936108] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.322 [2024-07-15 13:53:46.946933] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.322 [2024-07-15 13:53:46.946959] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.322 [2024-07-15 13:53:46.957180] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.322 [2024-07-15 13:53:46.957204] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.322 [2024-07-15 13:53:46.967641] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.322 [2024-07-15 13:53:46.967680] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.322 [2024-07-15 13:53:46.977961] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.322 [2024-07-15 13:53:46.977986] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.322 [2024-07-15 13:53:46.988525] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.322 [2024-07-15 13:53:46.988551] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.322 [2024-07-15 13:53:46.998923] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.322 [2024-07-15 13:53:46.998951] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.322 [2024-07-15 13:53:47.009585] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.322 [2024-07-15 13:53:47.009611] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.322 [2024-07-15 13:53:47.021366] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.322 [2024-07-15 13:53:47.021391] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.322 [2024-07-15 13:53:47.030789] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.322 [2024-07-15 13:53:47.030816] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.322 [2024-07-15 13:53:47.040851] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.322 [2024-07-15 13:53:47.040878] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.322 [2024-07-15 13:53:47.051085] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.322 [2024-07-15 13:53:47.051109] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.322 [2024-07-15 13:53:47.061593] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.322 [2024-07-15 13:53:47.061618] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.322 [2024-07-15 13:53:47.071771] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.322 [2024-07-15 13:53:47.071796] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.322 [2024-07-15 13:53:47.082004] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.322 [2024-07-15 13:53:47.082045] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.322 [2024-07-15 13:53:47.091753] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.322 [2024-07-15 13:53:47.091779] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.322 [2024-07-15 13:53:47.101838] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.322 [2024-07-15 13:53:47.101863] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.322 [2024-07-15 13:53:47.112011] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.322 [2024-07-15 13:53:47.112051] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.322 [2024-07-15 13:53:47.122144] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.322 [2024-07-15 13:53:47.122168] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.322 [2024-07-15 13:53:47.132265] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.322 [2024-07-15 13:53:47.132290] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.322 [2024-07-15 13:53:47.142494] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.322 [2024-07-15 13:53:47.142519] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.322 [2024-07-15 13:53:47.152847] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.322 [2024-07-15 13:53:47.152872] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.580 [2024-07-15 13:53:47.163600] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.580 [2024-07-15 13:53:47.163626] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.580 [2024-07-15 13:53:47.173708] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.580 [2024-07-15 13:53:47.173757] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.580 [2024-07-15 13:53:47.183688] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.580 [2024-07-15 13:53:47.183752] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.580 [2024-07-15 13:53:47.193646] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.580 [2024-07-15 13:53:47.193670] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.580 [2024-07-15 13:53:47.203915] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.580 [2024-07-15 13:53:47.203941] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.580 [2024-07-15 13:53:47.215555] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.580 [2024-07-15 13:53:47.215580] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.580 [2024-07-15 13:53:47.225311] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.580 [2024-07-15 13:53:47.225336] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.580 [2024-07-15 13:53:47.236322] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.580 [2024-07-15 13:53:47.236347] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.580 [2024-07-15 13:53:47.247325] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.580 [2024-07-15 13:53:47.247349] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.580 [2024-07-15 13:53:47.257581] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.580 [2024-07-15 13:53:47.257606] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.580 [2024-07-15 13:53:47.268330] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.580 [2024-07-15 13:53:47.268355] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.580 [2024-07-15 13:53:47.280384] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.580 [2024-07-15 13:53:47.280408] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.580 [2024-07-15 13:53:47.290435] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.580 [2024-07-15 13:53:47.290460] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.580 [2024-07-15 13:53:47.300565] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.580 [2024-07-15 13:53:47.300589] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.580 [2024-07-15 13:53:47.310816] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.580 [2024-07-15 13:53:47.310843] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.580 [2024-07-15 13:53:47.321539] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.580 [2024-07-15 13:53:47.321563] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.580 [2024-07-15 13:53:47.331778] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.580 [2024-07-15 13:53:47.331805] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.580 [2024-07-15 13:53:47.342430] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.580 [2024-07-15 13:53:47.342454] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.580 [2024-07-15 13:53:47.352982] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.580 [2024-07-15 13:53:47.353008] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.580 [2024-07-15 13:53:47.365212] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.580 [2024-07-15 13:53:47.365236] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.580 [2024-07-15 13:53:47.374366] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.580 [2024-07-15 13:53:47.374390] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.581 [2024-07-15 13:53:47.385046] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.581 [2024-07-15 13:53:47.385078] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.581 [2024-07-15 13:53:47.396812] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.581 [2024-07-15 13:53:47.396838] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.581 [2024-07-15 13:53:47.405976] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.581 [2024-07-15 13:53:47.406001] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.581 [2024-07-15 13:53:47.417138] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.581 [2024-07-15 13:53:47.417164] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.840 [2024-07-15 13:53:47.429655] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.840 [2024-07-15 13:53:47.429680] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.840 [2024-07-15 13:53:47.439439] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.840 [2024-07-15 13:53:47.439463] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.840 [2024-07-15 13:53:47.449858] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.840 [2024-07-15 13:53:47.449883] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.840 [2024-07-15 13:53:47.460411] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.840 [2024-07-15 13:53:47.460435] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.840 [2024-07-15 13:53:47.472499] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.840 [2024-07-15 13:53:47.472523] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.840 [2024-07-15 13:53:47.482262] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.840 [2024-07-15 13:53:47.482286] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.840 [2024-07-15 13:53:47.492884] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.840 [2024-07-15 13:53:47.492910] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.840 [2024-07-15 13:53:47.503341] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.840 [2024-07-15 13:53:47.503366] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.840 [2024-07-15 13:53:47.513589] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.840 [2024-07-15 13:53:47.513613] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.840 [2024-07-15 13:53:47.524137] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.840 [2024-07-15 13:53:47.524162] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.840 [2024-07-15 13:53:47.534874] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.840 [2024-07-15 13:53:47.534901] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.840 [2024-07-15 13:53:47.545332] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.840 [2024-07-15 13:53:47.545355] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.840 [2024-07-15 13:53:47.555901] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.840 [2024-07-15 13:53:47.555927] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.840 [2024-07-15 13:53:47.565962] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.840 [2024-07-15 13:53:47.565988] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.840 [2024-07-15 13:53:47.576907] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.840 [2024-07-15 13:53:47.576932] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.840 [2024-07-15 13:53:47.586929] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.840 [2024-07-15 13:53:47.586963] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.840 [2024-07-15 13:53:47.597559] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.840 [2024-07-15 13:53:47.597583] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.840 [2024-07-15 13:53:47.609220] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.840 [2024-07-15 13:53:47.609244] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.840 [2024-07-15 13:53:47.617964] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.840 [2024-07-15 13:53:47.617990] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.840 [2024-07-15 13:53:47.628776] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.840 [2024-07-15 13:53:47.628802] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.840 [2024-07-15 13:53:47.638825] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.840 [2024-07-15 13:53:47.638850] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.840 [2024-07-15 13:53:47.649317] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.840 [2024-07-15 13:53:47.649341] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.840 [2024-07-15 13:53:47.661938] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.840 [2024-07-15 13:53:47.661963] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.840 [2024-07-15 13:53:47.672154] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.840 [2024-07-15 13:53:47.672177] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.101 [2024-07-15 13:53:47.682463] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.101 [2024-07-15 13:53:47.682504] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.101 [2024-07-15 13:53:47.692637] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.101 [2024-07-15 13:53:47.692661] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.101 [2024-07-15 13:53:47.702899] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.101 [2024-07-15 13:53:47.702924] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.101 [2024-07-15 13:53:47.712851] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.101 [2024-07-15 13:53:47.712877] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.101 [2024-07-15 13:53:47.723322] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.101 [2024-07-15 13:53:47.723346] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.101 [2024-07-15 13:53:47.733483] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.101 [2024-07-15 13:53:47.733507] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.101 [2024-07-15 13:53:47.744069] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.101 [2024-07-15 13:53:47.744094] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.101 [2024-07-15 13:53:47.754249] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.101 [2024-07-15 13:53:47.754274] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.101 [2024-07-15 13:53:47.764770] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.101 [2024-07-15 13:53:47.764809] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.101 [2024-07-15 13:53:47.775040] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.101 [2024-07-15 13:53:47.775065] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.101 [2024-07-15 13:53:47.785326] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.101 [2024-07-15 13:53:47.785357] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.101 [2024-07-15 13:53:47.796106] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.101 [2024-07-15 13:53:47.796131] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.101 [2024-07-15 13:53:47.806533] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.101 [2024-07-15 13:53:47.806556] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.101 [2024-07-15 13:53:47.816606] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.101 [2024-07-15 13:53:47.816630] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.101 [2024-07-15 13:53:47.827260] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.101 [2024-07-15 13:53:47.827285] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.101 [2024-07-15 13:53:47.839375] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.101 [2024-07-15 13:53:47.839399] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.101 [2024-07-15 13:53:47.850981] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.101 [2024-07-15 13:53:47.851007] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.101 [2024-07-15 13:53:47.859788] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.101 [2024-07-15 13:53:47.859813] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.101 [2024-07-15 13:53:47.870936] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.101 [2024-07-15 13:53:47.870962] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.101 [2024-07-15 13:53:47.883067] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.101 [2024-07-15 13:53:47.883106] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.101 [2024-07-15 13:53:47.893412] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.101 [2024-07-15 13:53:47.893436] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.101 [2024-07-15 13:53:47.903920] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.101 [2024-07-15 13:53:47.903946] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.101 [2024-07-15 13:53:47.916421] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.101 [2024-07-15 13:53:47.916445] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.101 [2024-07-15 13:53:47.926439] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.101 [2024-07-15 13:53:47.926463] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.101 [2024-07-15 13:53:47.936887] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.101 [2024-07-15 13:53:47.936913] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.361 [2024-07-15 13:53:47.947671] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.361 [2024-07-15 13:53:47.947696] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.361 [2024-07-15 13:53:47.957933] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.361 [2024-07-15 13:53:47.957958] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.361 [2024-07-15 13:53:47.968167] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.361 [2024-07-15 13:53:47.968191] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.361 [2024-07-15 13:53:47.980856] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.361 [2024-07-15 13:53:47.980882] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.361 [2024-07-15 13:53:47.990347] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.361 [2024-07-15 13:53:47.990377] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.361 [2024-07-15 13:53:48.000379] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.361 [2024-07-15 13:53:48.000403] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.361 [2024-07-15 13:53:48.010842] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.361 [2024-07-15 13:53:48.010869] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.361 [2024-07-15 13:53:48.023360] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.361 [2024-07-15 13:53:48.023385] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.361 [2024-07-15 13:53:48.033432] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.361 [2024-07-15 13:53:48.033456] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.361 [2024-07-15 13:53:48.043512] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.361 [2024-07-15 13:53:48.043536] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.361 [2024-07-15 13:53:48.053709] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.361 [2024-07-15 13:53:48.053756] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.361 [2024-07-15 13:53:48.064115] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.361 [2024-07-15 13:53:48.064139] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.361 [2024-07-15 13:53:48.074303] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.361 [2024-07-15 13:53:48.074327] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.361 [2024-07-15 13:53:48.086900] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.361 [2024-07-15 13:53:48.086925] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.361 [2024-07-15 13:53:48.097165] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.361 [2024-07-15 13:53:48.097189] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.361 [2024-07-15 13:53:48.107586] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.361 [2024-07-15 13:53:48.107609] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.361 [2024-07-15 13:53:48.117808] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.361 [2024-07-15 13:53:48.117834] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.361 [2024-07-15 13:53:48.128120] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.361 [2024-07-15 13:53:48.128144] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.361 [2024-07-15 13:53:48.138910] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.361 [2024-07-15 13:53:48.138936] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.361 [2024-07-15 13:53:48.149537] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.361 [2024-07-15 13:53:48.149561] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.361 [2024-07-15 13:53:48.159881] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.361 [2024-07-15 13:53:48.159906] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.361 [2024-07-15 13:53:48.172287] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.361 [2024-07-15 13:53:48.172311] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.361 [2024-07-15 13:53:48.181956] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.361 [2024-07-15 13:53:48.181982] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.361 [2024-07-15 13:53:48.194019] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.361 [2024-07-15 13:53:48.194059] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.620 [2024-07-15 13:53:48.203633] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.620 [2024-07-15 13:53:48.203660] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.620 [2024-07-15 13:53:48.214515] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.620 [2024-07-15 13:53:48.214539] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.620 [2024-07-15 13:53:48.225223] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.620 [2024-07-15 13:53:48.225248] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.620 [2024-07-15 13:53:48.237579] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.620 [2024-07-15 13:53:48.237603] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.620 [2024-07-15 13:53:48.247327] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.620 [2024-07-15 13:53:48.247350] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.620 [2024-07-15 13:53:48.257575] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.620 [2024-07-15 13:53:48.257598] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.620 [2024-07-15 13:53:48.267872] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.620 [2024-07-15 13:53:48.267898] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.620 [2024-07-15 13:53:48.278334] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.620 [2024-07-15 13:53:48.278358] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.620 [2024-07-15 13:53:48.288576] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.620 [2024-07-15 13:53:48.288599] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.620 [2024-07-15 13:53:48.298557] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.620 [2024-07-15 13:53:48.298582] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.620 [2024-07-15 13:53:48.308663] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.620 [2024-07-15 13:53:48.308687] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.620 [2024-07-15 13:53:48.318472] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.620 [2024-07-15 13:53:48.318496] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.620 [2024-07-15 13:53:48.328332] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.620 [2024-07-15 13:53:48.328357] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.620 [2024-07-15 13:53:48.338522] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.620 [2024-07-15 13:53:48.338547] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.620 [2024-07-15 13:53:48.348692] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.620 [2024-07-15 13:53:48.348717] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.620 [2024-07-15 13:53:48.358583] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.620 [2024-07-15 13:53:48.358608] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.620 [2024-07-15 13:53:48.368992] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.620 [2024-07-15 13:53:48.369044] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.620 [2024-07-15 13:53:48.379277] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.620 [2024-07-15 13:53:48.379302] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.620 [2024-07-15 13:53:48.389409] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.621 [2024-07-15 13:53:48.389436] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.621 [2024-07-15 13:53:48.399623] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.621 [2024-07-15 13:53:48.399647] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.621 [2024-07-15 13:53:48.409776] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.621 [2024-07-15 13:53:48.409817] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.621 [2024-07-15 13:53:48.422539] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.621 [2024-07-15 13:53:48.422564] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.621 [2024-07-15 13:53:48.433966] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.621 [2024-07-15 13:53:48.433992] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.621 [2024-07-15 13:53:48.442756] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.621 [2024-07-15 13:53:48.442782] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.621 [2024-07-15 13:53:48.454206] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.621 [2024-07-15 13:53:48.454231] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.880 [2024-07-15 13:53:48.466901] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.880 [2024-07-15 13:53:48.466930] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.880 [2024-07-15 13:53:48.476529] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.880 [2024-07-15 13:53:48.476553] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.880 [2024-07-15 13:53:48.487830] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.881 [2024-07-15 13:53:48.487858] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.881 [2024-07-15 13:53:48.500123] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.881 [2024-07-15 13:53:48.500148] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.881 [2024-07-15 13:53:48.509921] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.881 [2024-07-15 13:53:48.509948] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.881 [2024-07-15 13:53:48.521330] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.881 [2024-07-15 13:53:48.521354] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.881 [2024-07-15 13:53:48.534076] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.881 [2024-07-15 13:53:48.534116] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.881 [2024-07-15 13:53:48.544497] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.881 [2024-07-15 13:53:48.544522] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.881 [2024-07-15 13:53:48.555244] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.881 [2024-07-15 13:53:48.555269] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.881 [2024-07-15 13:53:48.567800] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.881 [2024-07-15 13:53:48.567826] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.881 [2024-07-15 13:53:48.577671] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.881 [2024-07-15 13:53:48.577695] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.881 [2024-07-15 13:53:48.588532] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.881 [2024-07-15 13:53:48.588557] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.881 [2024-07-15 13:53:48.600783] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.881 [2024-07-15 13:53:48.600810] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.881 [2024-07-15 13:53:48.611383] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.881 [2024-07-15 13:53:48.611408] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.881 [2024-07-15 13:53:48.622669] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.881 [2024-07-15 13:53:48.622694] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.881 [2024-07-15 13:53:48.633437] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.881 [2024-07-15 13:53:48.633462] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.881 [2024-07-15 13:53:48.644390] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.881 [2024-07-15 13:53:48.644415] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.881 [2024-07-15 13:53:48.655348] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.881 [2024-07-15 13:53:48.655373] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.881 [2024-07-15 13:53:48.666457] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.881 [2024-07-15 13:53:48.666481] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.881 [2024-07-15 13:53:48.677052] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.881 [2024-07-15 13:53:48.677078] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.881 [2024-07-15 13:53:48.687463] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.881 [2024-07-15 13:53:48.687487] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.881 [2024-07-15 13:53:48.698027] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.881 [2024-07-15 13:53:48.698054] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.881 [2024-07-15 13:53:48.708768] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.881 [2024-07-15 13:53:48.708795] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.881 [2024-07-15 13:53:48.719686] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.881 [2024-07-15 13:53:48.719712] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.140 [2024-07-15 13:53:48.730610] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.140 [2024-07-15 13:53:48.730636] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.140 [2024-07-15 13:53:48.741371] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.140 [2024-07-15 13:53:48.741396] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.140 [2024-07-15 13:53:48.752255] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.140 [2024-07-15 13:53:48.752279] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.140 [2024-07-15 13:53:48.764656] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.140 [2024-07-15 13:53:48.764681] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.140 [2024-07-15 13:53:48.774787] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.140 [2024-07-15 13:53:48.774813] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.140 [2024-07-15 13:53:48.785400] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.140 [2024-07-15 13:53:48.785425] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.140 [2024-07-15 13:53:48.798281] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.140 [2024-07-15 13:53:48.798318] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.140 [2024-07-15 13:53:48.808305] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.140 [2024-07-15 13:53:48.808330] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.140 [2024-07-15 13:53:48.819014] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.140 [2024-07-15 13:53:48.819055] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.140 [2024-07-15 13:53:48.829574] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.140 [2024-07-15 13:53:48.829599] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.140 [2024-07-15 13:53:48.840410] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.140 [2024-07-15 13:53:48.840435] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.140 [2024-07-15 13:53:48.852536] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.140 [2024-07-15 13:53:48.852561] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.140 [2024-07-15 13:53:48.864105] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.140 [2024-07-15 13:53:48.864130] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.140 [2024-07-15 13:53:48.873031] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.140 [2024-07-15 13:53:48.873057] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.140 [2024-07-15 13:53:48.884329] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.140 [2024-07-15 13:53:48.884355] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.140 [2024-07-15 13:53:48.896907] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.140 [2024-07-15 13:53:48.896934] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.140 [2024-07-15 13:53:48.907155] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.140 [2024-07-15 13:53:48.907180] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.140 [2024-07-15 13:53:48.917463] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.140 [2024-07-15 13:53:48.917488] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.140 [2024-07-15 13:53:48.927675] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.140 [2024-07-15 13:53:48.927700] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.140 [2024-07-15 13:53:48.937894] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.140 [2024-07-15 13:53:48.937921] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.140 [2024-07-15 13:53:48.948500] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.140 [2024-07-15 13:53:48.948525] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.140 [2024-07-15 13:53:48.960686] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.140 [2024-07-15 13:53:48.960710] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.140 [2024-07-15 13:53:48.970247] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.140 [2024-07-15 13:53:48.970271] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.400 [2024-07-15 13:53:48.981461] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.400 [2024-07-15 13:53:48.981486] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.400 [2024-07-15 13:53:48.991883] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.400 [2024-07-15 13:53:48.991909] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.400 [2024-07-15 13:53:49.002427] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.400 [2024-07-15 13:53:49.002459] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.400 [2024-07-15 13:53:49.012859] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.400 [2024-07-15 13:53:49.012885] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.400 [2024-07-15 13:53:49.023060] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.400 [2024-07-15 13:53:49.023101] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.401 [2024-07-15 13:53:49.033433] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.401 [2024-07-15 13:53:49.033459] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.401 [2024-07-15 13:53:49.043531] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.401 [2024-07-15 13:53:49.043555] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.401 [2024-07-15 13:53:49.053541] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.401 [2024-07-15 13:53:49.053565] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.401 [2024-07-15 13:53:49.064185] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.401 [2024-07-15 13:53:49.064210] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.401 [2024-07-15 13:53:49.077507] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.401 [2024-07-15 13:53:49.077531] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.401 [2024-07-15 13:53:49.087535] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.401 [2024-07-15 13:53:49.087559] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.401 [2024-07-15 13:53:49.098630] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.401 [2024-07-15 13:53:49.098655] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.401 [2024-07-15 13:53:49.108520] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.401 [2024-07-15 13:53:49.108544] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.401 [2024-07-15 13:53:49.118952] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.401 [2024-07-15 13:53:49.118978] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.401 [2024-07-15 13:53:49.129576] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.401 [2024-07-15 13:53:49.129601] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.401 [2024-07-15 13:53:49.139806] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.401 [2024-07-15 13:53:49.139832] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.401 [2024-07-15 13:53:49.149909] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.401 [2024-07-15 13:53:49.149935] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.401 [2024-07-15 13:53:49.159935] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.401 [2024-07-15 13:53:49.159960] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.401 [2024-07-15 13:53:49.170048] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.401 [2024-07-15 13:53:49.170072] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.401 [2024-07-15 13:53:49.180567] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.401 [2024-07-15 13:53:49.180591] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.401 [2024-07-15 13:53:49.192886] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.401 [2024-07-15 13:53:49.192912] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.401 [2024-07-15 13:53:49.202436] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.401 [2024-07-15 13:53:49.202467] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.401 [2024-07-15 13:53:49.213178] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.401 [2024-07-15 13:53:49.213202] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.401 [2024-07-15 13:53:49.223381] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.401 [2024-07-15 13:53:49.223405] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.401 [2024-07-15 13:53:49.233961] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.401 [2024-07-15 13:53:49.233987] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.659 [2024-07-15 13:53:49.244915] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.659 [2024-07-15 13:53:49.244943] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.659 [2024-07-15 13:53:49.255214] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.659 [2024-07-15 13:53:49.255238] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.659 [2024-07-15 13:53:49.267680] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.659 [2024-07-15 13:53:49.267704] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.659 [2024-07-15 13:53:49.276864] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.659 [2024-07-15 13:53:49.276890] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.659 [2024-07-15 13:53:49.287611] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.659 [2024-07-15 13:53:49.287635] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.659 [2024-07-15 13:53:49.299601] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.659 [2024-07-15 13:53:49.299625] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.659 [2024-07-15 13:53:49.309495] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.659 [2024-07-15 13:53:49.309519] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.659 [2024-07-15 13:53:49.319626] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.659 [2024-07-15 13:53:49.319650] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.659 [2024-07-15 13:53:49.329446] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.659 [2024-07-15 13:53:49.329469] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.659 [2024-07-15 13:53:49.339514] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.659 [2024-07-15 13:53:49.339538] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.659 [2024-07-15 13:53:49.349747] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.659 [2024-07-15 13:53:49.349771] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.659 [2024-07-15 13:53:49.359974] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.659 [2024-07-15 13:53:49.360000] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.659 [2024-07-15 13:53:49.370247] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.659 [2024-07-15 13:53:49.370270] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.659 [2024-07-15 13:53:49.380462] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.659 [2024-07-15 13:53:49.380486] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.659 [2024-07-15 13:53:49.391160] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.659 [2024-07-15 13:53:49.391185] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.659 [2024-07-15 13:53:49.401435] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.659 [2024-07-15 13:53:49.401466] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.659 [2024-07-15 13:53:49.411460] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.659 [2024-07-15 13:53:49.411485] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.659 [2024-07-15 13:53:49.421499] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.659 [2024-07-15 13:53:49.421523] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.659 [2024-07-15 13:53:49.431569] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.659 [2024-07-15 13:53:49.431593] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.659 [2024-07-15 13:53:49.441926] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.659 [2024-07-15 13:53:49.441951] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.659 [2024-07-15 13:53:49.452331] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.660 [2024-07-15 13:53:49.452356] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.660 [2024-07-15 13:53:49.462763] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.660 [2024-07-15 13:53:49.462804] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.660 [2024-07-15 13:53:49.472979] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.660 [2024-07-15 13:53:49.473005] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.660 [2024-07-15 13:53:49.483004] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.660 [2024-07-15 13:53:49.483045] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.660 [2024-07-15 13:53:49.493663] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.660 [2024-07-15 13:53:49.493688] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.919 [2024-07-15 13:53:49.504621] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.920 [2024-07-15 13:53:49.504647] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.920 [2024-07-15 13:53:49.516758] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.920 [2024-07-15 13:53:49.516799] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.920 [2024-07-15 13:53:49.526963] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.920 [2024-07-15 13:53:49.526990] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.920 [2024-07-15 13:53:49.537373] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.920 [2024-07-15 13:53:49.537398] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.920 [2024-07-15 13:53:49.547748] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.920 [2024-07-15 13:53:49.547776] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.920 [2024-07-15 13:53:49.558154] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.920 [2024-07-15 13:53:49.558179] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.920 [2024-07-15 13:53:49.568301] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.920 [2024-07-15 13:53:49.568325] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.920 [2024-07-15 13:53:49.578700] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.920 [2024-07-15 13:53:49.578748] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.920 [2024-07-15 13:53:49.591007] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.920 [2024-07-15 13:53:49.591048] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.920 [2024-07-15 13:53:49.600369] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.920 [2024-07-15 13:53:49.600400] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.920 [2024-07-15 13:53:49.611791] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.920 [2024-07-15 13:53:49.611817] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.920 [2024-07-15 13:53:49.623658] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.920 [2024-07-15 13:53:49.623682] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.920 [2024-07-15 13:53:49.633505] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.920 [2024-07-15 13:53:49.633529] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.920 [2024-07-15 13:53:49.643602] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.920 [2024-07-15 13:53:49.643626] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.920 [2024-07-15 13:53:49.653626] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.920 [2024-07-15 13:53:49.653650] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.920 [2024-07-15 13:53:49.664133] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.920 [2024-07-15 13:53:49.664158] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.920 [2024-07-15 13:53:49.675153] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.920 [2024-07-15 13:53:49.675177] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.920 [2024-07-15 13:53:49.685404] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.920 [2024-07-15 13:53:49.685428] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.920 [2024-07-15 13:53:49.697693] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.920 [2024-07-15 13:53:49.697717] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.920 [2024-07-15 13:53:49.707642] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.920 [2024-07-15 13:53:49.707666] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.920 [2024-07-15 13:53:49.718029] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.920 [2024-07-15 13:53:49.718060] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.920 [2024-07-15 13:53:49.728513] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.920 [2024-07-15 13:53:49.728537] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.920 [2024-07-15 13:53:49.740358] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.920 [2024-07-15 13:53:49.740382] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.920 [2024-07-15 13:53:49.750372] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.920 [2024-07-15 13:53:49.750396] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.179 [2024-07-15 13:53:49.761170] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.179 [2024-07-15 13:53:49.761196] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.179 [2024-07-15 13:53:49.771174] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.179 [2024-07-15 13:53:49.771198] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.179 [2024-07-15 13:53:49.781186] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.179 [2024-07-15 13:53:49.781209] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.179 [2024-07-15 13:53:49.791072] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.179 [2024-07-15 13:53:49.791109] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.179 [2024-07-15 13:53:49.801671] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.179 [2024-07-15 13:53:49.801696] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.179 [2024-07-15 13:53:49.812070] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.179 [2024-07-15 13:53:49.812108] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.179 [2024-07-15 13:53:49.822694] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.179 [2024-07-15 13:53:49.822719] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.179 [2024-07-15 13:53:49.835104] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.179 [2024-07-15 13:53:49.835128] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.180 [2024-07-15 13:53:49.844474] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.180 [2024-07-15 13:53:49.844498] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.180 [2024-07-15 13:53:49.854602] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.180 [2024-07-15 13:53:49.854626] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.180 [2024-07-15 13:53:49.865146] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.180 [2024-07-15 13:53:49.865171] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.180 [2024-07-15 13:53:49.877440] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.180 [2024-07-15 13:53:49.877464] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.180 [2024-07-15 13:53:49.887198] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.180 [2024-07-15 13:53:49.887223] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.180 [2024-07-15 13:53:49.897904] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.180 [2024-07-15 13:53:49.897929] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.180 [2024-07-15 13:53:49.908165] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.180 [2024-07-15 13:53:49.908189] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.180 [2024-07-15 13:53:49.918196] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.180 [2024-07-15 13:53:49.918220] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.180 [2024-07-15 13:53:49.928625] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.180 [2024-07-15 13:53:49.928648] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.180 [2024-07-15 13:53:49.941137] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.180 [2024-07-15 13:53:49.941161] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.180 [2024-07-15 13:53:49.953011] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.180 [2024-07-15 13:53:49.953050] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.180 [2024-07-15 13:53:49.961555] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.180 [2024-07-15 13:53:49.961579] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.180 [2024-07-15 13:53:49.972905] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.180 [2024-07-15 13:53:49.972932] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.180 [2024-07-15 13:53:49.983549] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.180 [2024-07-15 13:53:49.983575] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.180 [2024-07-15 13:53:49.994129] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.180 [2024-07-15 13:53:49.994154] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.180 [2024-07-15 13:53:50.008340] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.180 [2024-07-15 13:53:50.008372] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.440 [2024-07-15 13:53:50.022466] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.440 [2024-07-15 13:53:50.022501] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.440 [2024-07-15 13:53:50.036135] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.440 [2024-07-15 13:53:50.036174] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.440 [2024-07-15 13:53:50.047258] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.440 [2024-07-15 13:53:50.047284] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.440 [2024-07-15 13:53:50.059942] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.440 [2024-07-15 13:53:50.059970] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.440 [2024-07-15 13:53:50.070454] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.440 [2024-07-15 13:53:50.070479] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.440 [2024-07-15 13:53:50.081784] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.440 [2024-07-15 13:53:50.081810] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.440 [2024-07-15 13:53:50.092700] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.440 [2024-07-15 13:53:50.092748] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.440 [2024-07-15 13:53:50.103436] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.440 [2024-07-15 13:53:50.103461] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.440 [2024-07-15 13:53:50.114314] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.440 [2024-07-15 13:53:50.114338] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.440 [2024-07-15 13:53:50.126834] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.440 [2024-07-15 13:53:50.126861] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.440 [2024-07-15 13:53:50.137340] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.440 [2024-07-15 13:53:50.137365] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.440 [2024-07-15 13:53:50.148004] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.440 [2024-07-15 13:53:50.148043] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.440 [2024-07-15 13:53:50.160633] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.440 [2024-07-15 13:53:50.160658] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.440 [2024-07-15 13:53:50.171047] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.440 [2024-07-15 13:53:50.171073] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.440 [2024-07-15 13:53:50.182022] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.440 [2024-07-15 13:53:50.182062] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.440 [2024-07-15 13:53:50.197842] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.440 [2024-07-15 13:53:50.197870] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.440 [2024-07-15 13:53:50.207438] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.440 [2024-07-15 13:53:50.207463] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.440 [2024-07-15 13:53:50.217673] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.440 [2024-07-15 13:53:50.217698] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.440 [2024-07-15 13:53:50.228395] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.440 [2024-07-15 13:53:50.228420] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.440 [2024-07-15 13:53:50.239175] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.440 [2024-07-15 13:53:50.239200] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.440 [2024-07-15 13:53:50.250344] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.440 [2024-07-15 13:53:50.250370] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.440 [2024-07-15 13:53:50.261065] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.440 [2024-07-15 13:53:50.261104] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.440 [2024-07-15 13:53:50.271786] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.440 [2024-07-15 13:53:50.271813] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.700 [2024-07-15 13:53:50.283301] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.700 [2024-07-15 13:53:50.283328] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.700 [2024-07-15 13:53:50.293885] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.700 [2024-07-15 13:53:50.293911] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.700 [2024-07-15 13:53:50.304322] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.700 [2024-07-15 13:53:50.304346] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.700 [2024-07-15 13:53:50.317254] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.700 [2024-07-15 13:53:50.317279] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.700 [2024-07-15 13:53:50.326877] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.700 [2024-07-15 13:53:50.326903] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.700 [2024-07-15 13:53:50.337876] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.700 [2024-07-15 13:53:50.337903] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.700 [2024-07-15 13:53:50.348413] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.700 [2024-07-15 13:53:50.348438] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.700 [2024-07-15 13:53:50.359246] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.700 [2024-07-15 13:53:50.359272] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.700 [2024-07-15 13:53:50.371933] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.700 [2024-07-15 13:53:50.371959] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.700 [2024-07-15 13:53:50.382527] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.700 [2024-07-15 13:53:50.382552] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.700 [2024-07-15 13:53:50.393070] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.700 [2024-07-15 13:53:50.393109] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.700 [2024-07-15 13:53:50.403898] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.700 [2024-07-15 13:53:50.403925] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.701 [2024-07-15 13:53:50.414530] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.701 [2024-07-15 13:53:50.414555] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.701 [2024-07-15 13:53:50.426590] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.701 [2024-07-15 13:53:50.426620] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.701 [2024-07-15 13:53:50.436154] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.701 [2024-07-15 13:53:50.436178] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.701 [2024-07-15 13:53:50.447211] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.701 [2024-07-15 13:53:50.447236] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.701 [2024-07-15 13:53:50.459106] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.701 [2024-07-15 13:53:50.459131] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.701 [2024-07-15 13:53:50.468936] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.701 [2024-07-15 13:53:50.468963] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.701 [2024-07-15 13:53:50.480046] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.701 [2024-07-15 13:53:50.480070] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.701 [2024-07-15 13:53:50.490040] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.701 [2024-07-15 13:53:50.490065] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.701 [2024-07-15 13:53:50.500517] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.701 [2024-07-15 13:53:50.500541] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.701 [2024-07-15 13:53:50.511963] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.701 [2024-07-15 13:53:50.511988] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.701 [2024-07-15 13:53:50.521695] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.701 [2024-07-15 13:53:50.521734] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.701 [2024-07-15 13:53:50.532113] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.701 [2024-07-15 13:53:50.532137] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.961 [2024-07-15 13:53:50.544696] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.961 [2024-07-15 13:53:50.544736] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.961 [2024-07-15 13:53:50.553664] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.961 [2024-07-15 13:53:50.553688] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.961 [2024-07-15 13:53:50.566073] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.961 [2024-07-15 13:53:50.566111] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.961 [2024-07-15 13:53:50.576457] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.961 [2024-07-15 13:53:50.576481] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.961 [2024-07-15 13:53:50.586766] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.961 [2024-07-15 13:53:50.586792] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.961 [2024-07-15 13:53:50.596810] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.961 [2024-07-15 13:53:50.596836] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.961 [2024-07-15 13:53:50.607139] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.961 [2024-07-15 13:53:50.607164] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.961 [2024-07-15 13:53:50.617054] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.961 [2024-07-15 13:53:50.617094] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.961 [2024-07-15 13:53:50.627537] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.961 [2024-07-15 13:53:50.627569] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.961 [2024-07-15 13:53:50.639882] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.961 [2024-07-15 13:53:50.639911] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.961 [2024-07-15 13:53:50.649687] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.961 [2024-07-15 13:53:50.649712] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.961 [2024-07-15 13:53:50.660012] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.961 [2024-07-15 13:53:50.660052] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.961 [2024-07-15 13:53:50.670443] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.961 [2024-07-15 13:53:50.670467] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.961 [2024-07-15 13:53:50.683265] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.961 [2024-07-15 13:53:50.683290] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.961 [2024-07-15 13:53:50.693310] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.961 [2024-07-15 13:53:50.693334] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.961 [2024-07-15 13:53:50.703967] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.961 [2024-07-15 13:53:50.703994] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.961 [2024-07-15 13:53:50.716266] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.961 [2024-07-15 13:53:50.716291] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.961 [2024-07-15 13:53:50.725964] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.961 [2024-07-15 13:53:50.725991] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.961 [2024-07-15 13:53:50.735902] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.961 [2024-07-15 13:53:50.735928] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.961 [2024-07-15 13:53:50.745931] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.961 [2024-07-15 13:53:50.745957] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.961 [2024-07-15 13:53:50.756169] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.961 [2024-07-15 13:53:50.756193] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.961 [2024-07-15 13:53:50.765648] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.961 [2024-07-15 13:53:50.765671] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.961 [2024-07-15 13:53:50.775827] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.961 [2024-07-15 13:53:50.775852] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.961 [2024-07-15 13:53:50.785625] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.961 [2024-07-15 13:53:50.785649] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.961 [2024-07-15 13:53:50.795518] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.961 [2024-07-15 13:53:50.795552] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.220 [2024-07-15 13:53:50.806324] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.220 [2024-07-15 13:53:50.806349] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.220 [2024-07-15 13:53:50.818448] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.220 [2024-07-15 13:53:50.818473] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.220 [2024-07-15 13:53:50.829975] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.220 [2024-07-15 13:53:50.830008] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.220 [2024-07-15 13:53:50.838776] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.220 [2024-07-15 13:53:50.838801] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.220 [2024-07-15 13:53:50.849680] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.220 [2024-07-15 13:53:50.849704] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.220 [2024-07-15 13:53:50.861376] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.220 [2024-07-15 13:53:50.861400] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.220 [2024-07-15 13:53:50.871319] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.220 [2024-07-15 13:53:50.871343] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.220 [2024-07-15 13:53:50.881838] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.220 [2024-07-15 13:53:50.881864] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.220 [2024-07-15 13:53:50.893608] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.220 [2024-07-15 13:53:50.893631] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.220 [2024-07-15 13:53:50.903009] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.220 [2024-07-15 13:53:50.903048] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.220 [2024-07-15 13:53:50.913233] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.220 [2024-07-15 13:53:50.913257] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.220 [2024-07-15 13:53:50.923332] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.220 [2024-07-15 13:53:50.923355] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.220 [2024-07-15 13:53:50.933426] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.220 [2024-07-15 13:53:50.933450] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.220 [2024-07-15 13:53:50.943561] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.220 [2024-07-15 13:53:50.943585] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.220 [2024-07-15 13:53:50.951506] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.220 [2024-07-15 13:53:50.951530] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.220 00:14:56.220 Latency(us) 00:14:56.220 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:56.220 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:14:56.220 Nvme1n1 : 5.01 12189.89 95.23 0.00 0.00 10486.82 4587.52 22136.60 00:14:56.220 =================================================================================================================== 00:14:56.220 Total : 12189.89 95.23 0.00 0.00 10486.82 4587.52 22136.60 00:14:56.220 [2024-07-15 13:53:50.957943] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.220 [2024-07-15 13:53:50.957967] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.220 [2024-07-15 13:53:50.965959] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.220 [2024-07-15 13:53:50.965982] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.220 [2024-07-15 13:53:50.973979] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.220 [2024-07-15 13:53:50.974001] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.220 [2024-07-15 13:53:50.982071] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.220 [2024-07-15 13:53:50.982136] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.220 [2024-07-15 13:53:50.990126] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.220 [2024-07-15 13:53:50.990179] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.220 [2024-07-15 13:53:50.998104] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.220 [2024-07-15 13:53:50.998157] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.220 [2024-07-15 13:53:51.006132] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.220 [2024-07-15 13:53:51.006183] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.220 [2024-07-15 13:53:51.014163] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.220 [2024-07-15 13:53:51.014218] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.220 [2024-07-15 13:53:51.022173] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.220 [2024-07-15 13:53:51.022226] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.220 [2024-07-15 13:53:51.030192] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.220 [2024-07-15 13:53:51.030242] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.220 [2024-07-15 13:53:51.038216] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.220 [2024-07-15 13:53:51.038268] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.220 [2024-07-15 13:53:51.046240] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.220 [2024-07-15 13:53:51.046302] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.220 [2024-07-15 13:53:51.054263] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.220 [2024-07-15 13:53:51.054312] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.480 [2024-07-15 13:53:51.062292] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.480 [2024-07-15 13:53:51.062342] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.480 [2024-07-15 13:53:51.070301] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.480 [2024-07-15 13:53:51.070353] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.480 [2024-07-15 13:53:51.078325] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.480 [2024-07-15 13:53:51.078373] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.480 [2024-07-15 13:53:51.086355] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.480 [2024-07-15 13:53:51.086404] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.480 [2024-07-15 13:53:51.094383] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.480 [2024-07-15 13:53:51.094426] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.480 [2024-07-15 13:53:51.102324] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.480 [2024-07-15 13:53:51.102344] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.480 [2024-07-15 13:53:51.110345] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.480 [2024-07-15 13:53:51.110365] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.480 [2024-07-15 13:53:51.118368] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.480 [2024-07-15 13:53:51.118388] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.480 [2024-07-15 13:53:51.126394] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.480 [2024-07-15 13:53:51.126415] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.480 [2024-07-15 13:53:51.134477] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.480 [2024-07-15 13:53:51.134525] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.480 [2024-07-15 13:53:51.142498] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.480 [2024-07-15 13:53:51.142548] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.480 [2024-07-15 13:53:51.150486] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.480 [2024-07-15 13:53:51.150521] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.480 [2024-07-15 13:53:51.158474] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.480 [2024-07-15 13:53:51.158494] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.480 [2024-07-15 13:53:51.166495] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.480 [2024-07-15 13:53:51.166514] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.480 [2024-07-15 13:53:51.174517] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.480 [2024-07-15 13:53:51.174551] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.480 [2024-07-15 13:53:51.182555] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.480 [2024-07-15 13:53:51.182579] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.480 [2024-07-15 13:53:51.190632] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.480 [2024-07-15 13:53:51.190685] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.480 [2024-07-15 13:53:51.198654] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.480 [2024-07-15 13:53:51.198704] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.480 [2024-07-15 13:53:51.206606] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.480 [2024-07-15 13:53:51.206627] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.480 [2024-07-15 13:53:51.214624] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.480 [2024-07-15 13:53:51.214643] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.480 [2024-07-15 13:53:51.222644] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.480 [2024-07-15 13:53:51.222663] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3733483) - No such process 00:14:56.480 13:53:51 nvmf_tcp.nvmf_zcopy -- 
target/zcopy.sh@49 -- # wait 3733483 00:14:56.480 13:53:51 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:56.480 13:53:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.480 13:53:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:56.480 13:53:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.480 13:53:51 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:56.480 13:53:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.480 13:53:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:56.480 delay0 00:14:56.480 13:53:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.480 13:53:51 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:14:56.480 13:53:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.480 13:53:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:56.480 13:53:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.480 13:53:51 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:14:56.480 EAL: No free 2048 kB hugepages reported on node 1 00:14:56.739 [2024-07-15 13:53:51.376888] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:15:04.849 Initializing NVMe Controllers 00:15:04.849 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:04.849 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:04.849 Initialization complete. Launching workers. 
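The rpc_cmd calls above swap the subsystem's namespace for a delay bdev before the abort run: namespace 1 is removed from nqn.2016-06.io.spdk:cnode1, a delay bdev (delay0) is layered on top of malloc0 (the -r/-t/-w/-n values are average/p99 read and write latencies in microseconds, so roughly one second each), and delay0 is re-attached as namespace 1. A rough standalone equivalent, assuming SPDK's default /var/tmp/spdk.sock RPC socket rather than the harness's rpc_cmd wrapper:

    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000    # ~1 s avg/p99 read and write latency
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # Drive the slowed namespace over TCP and issue aborts against the queued I/O:
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

Because every I/O now takes on the order of a second, nearly the whole queue is still outstanding when the abort commands arrive, which is reflected in the abort statistics that follow.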
00:15:04.849 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 240, failed: 23218 00:15:04.849 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 23343, failed to submit 115 00:15:04.849 success 23246, unsuccess 97, failed 0 00:15:04.849 13:53:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:15:04.849 13:53:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:15:04.849 13:53:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:04.849 13:53:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:15:04.849 13:53:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:04.849 13:53:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:15:04.849 13:53:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:04.849 13:53:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:04.849 rmmod nvme_tcp 00:15:04.849 rmmod nvme_fabrics 00:15:04.849 rmmod nvme_keyring 00:15:04.849 13:53:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:04.849 13:53:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:15:04.849 13:53:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:15:04.849 13:53:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 3732144 ']' 00:15:04.849 13:53:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 3732144 00:15:04.849 13:53:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 3732144 ']' 00:15:04.849 13:53:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 3732144 00:15:04.849 13:53:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:15:04.849 13:53:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:04.849 13:53:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3732144 00:15:04.849 13:53:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:04.849 13:53:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:04.849 13:53:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3732144' 00:15:04.849 killing process with pid 3732144 00:15:04.849 13:53:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 3732144 00:15:04.849 13:53:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 3732144 00:15:04.849 13:53:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:04.849 13:53:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:04.849 13:53:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:04.850 13:53:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:04.850 13:53:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:04.850 13:53:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:04.850 13:53:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:04.850 13:53:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:06.225 13:54:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:06.225 00:15:06.225 real 0m28.801s 00:15:06.225 user 0m40.278s 00:15:06.225 sys 0m11.106s 00:15:06.225 13:54:00 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:15:06.225 13:54:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:06.225 ************************************ 00:15:06.225 END TEST nvmf_zcopy 00:15:06.225 ************************************ 00:15:06.225 13:54:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:06.225 13:54:01 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:06.226 13:54:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:06.226 13:54:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:06.226 13:54:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:06.226 ************************************ 00:15:06.226 START TEST nvmf_nmic 00:15:06.226 ************************************ 00:15:06.226 13:54:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:06.503 * Looking for test storage... 00:15:06.503 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:06.503 13:54:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:06.503 13:54:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:15:06.503 13:54:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:06.503 13:54:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:06.503 13:54:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:06.503 13:54:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:06.503 13:54:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:06.503 13:54:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:06.503 13:54:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:06.503 13:54:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:06.503 13:54:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:06.503 13:54:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:06.503 13:54:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:06.503 13:54:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:06.503 13:54:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:06.503 13:54:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:06.503 13:54:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:06.503 13:54:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:06.503 13:54:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:06.503 13:54:01 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:06.503 13:54:01 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:06.503 13:54:01 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:06.503 13:54:01 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.503 13:54:01 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.503 13:54:01 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.503 13:54:01 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:15:06.503 13:54:01 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.503 13:54:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:15:06.503 13:54:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:06.503 13:54:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:06.503 13:54:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:06.503 13:54:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:06.503 13:54:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:06.503 13:54:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:06.503 13:54:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:06.503 13:54:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:06.504 13:54:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:06.504 13:54:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:06.504 13:54:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:15:06.504 13:54:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:06.504 13:54:01 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:06.504 13:54:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:06.504 13:54:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:06.504 13:54:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:06.504 13:54:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:06.504 13:54:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:06.504 13:54:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:06.504 13:54:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:06.504 13:54:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:06.504 13:54:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:15:06.504 13:54:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:15:08.433 Found 0000:84:00.0 (0x8086 - 0x159b) 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:15:08.433 Found 0000:84:00.1 (0x8086 - 0x159b) 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:15:08.433 Found net devices under 0000:84:00.0: cvl_0_0 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:15:08.433 Found net devices under 0000:84:00.1: cvl_0_1 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:08.433 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:08.691 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:08.691 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:08.691 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:08.691 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:08.691 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:08.691 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:08.691 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:08.691 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:08.691 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:15:08.691 00:15:08.691 --- 10.0.0.2 ping statistics --- 00:15:08.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:08.691 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:15:08.691 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:08.691 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:08.691 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:15:08.691 00:15:08.691 --- 10.0.0.1 ping statistics --- 00:15:08.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:08.691 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:15:08.691 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:08.691 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:15:08.691 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:08.691 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:08.691 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:08.691 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:08.691 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:08.691 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:08.691 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:08.691 13:54:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:15:08.691 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:08.691 13:54:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:08.691 13:54:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:08.691 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=3737136 00:15:08.691 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:08.691 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 3737136 00:15:08.691 13:54:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 3737136 ']' 00:15:08.691 13:54:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:08.691 13:54:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:08.691 13:54:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:08.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:08.691 13:54:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:08.691 13:54:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:08.691 [2024-07-15 13:54:03.418490] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:15:08.691 [2024-07-15 13:54:03.418583] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:08.691 EAL: No free 2048 kB hugepages reported on node 1 00:15:08.691 [2024-07-15 13:54:03.490871] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:08.948 [2024-07-15 13:54:03.612029] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:08.949 [2024-07-15 13:54:03.612086] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:08.949 [2024-07-15 13:54:03.612116] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:08.949 [2024-07-15 13:54:03.612129] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:08.949 [2024-07-15 13:54:03.612139] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:08.949 [2024-07-15 13:54:03.612192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:08.949 [2024-07-15 13:54:03.612584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:08.949 [2024-07-15 13:54:03.612609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:08.949 [2024-07-15 13:54:03.612613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:08.949 13:54:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:08.949 13:54:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:15:08.949 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:08.949 13:54:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:08.949 13:54:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:08.949 13:54:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:08.949 13:54:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:08.949 13:54:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.949 13:54:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:08.949 [2024-07-15 13:54:03.757461] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:08.949 13:54:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.949 13:54:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:08.949 13:54:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.949 13:54:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:08.949 Malloc0 00:15:09.208 13:54:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.208 13:54:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:09.208 13:54:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.208 13:54:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:09.208 13:54:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.208 13:54:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:09.208 13:54:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.208 13:54:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:09.208 13:54:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.208 13:54:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:09.208 13:54:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.208 13:54:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:09.208 [2024-07-15 13:54:03.808713] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:09.208 13:54:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.208 13:54:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:15:09.208 test case1: single bdev can't be used in multiple subsystems 00:15:09.208 13:54:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:09.208 13:54:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.208 13:54:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:09.208 13:54:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.208 13:54:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:09.208 13:54:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.208 13:54:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:09.208 13:54:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.208 13:54:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:15:09.208 13:54:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:15:09.208 13:54:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.208 13:54:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:09.208 [2024-07-15 13:54:03.832573] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:15:09.208 [2024-07-15 13:54:03.832601] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:15:09.208 [2024-07-15 13:54:03.832633] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:09.208 request: 00:15:09.208 { 00:15:09.208 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:09.208 "namespace": { 00:15:09.208 "bdev_name": "Malloc0", 00:15:09.208 "no_auto_visible": false 00:15:09.208 }, 00:15:09.208 "method": "nvmf_subsystem_add_ns", 00:15:09.208 "req_id": 1 00:15:09.208 } 00:15:09.208 Got JSON-RPC error response 00:15:09.208 response: 00:15:09.208 { 00:15:09.208 "code": -32602, 00:15:09.208 "message": "Invalid parameters" 00:15:09.208 } 00:15:09.208 13:54:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:09.208 13:54:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:15:09.208 13:54:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:15:09.208 13:54:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:15:09.208 Adding namespace failed - expected result. 
00:15:09.208 13:54:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:15:09.208 test case2: host connect to nvmf target in multiple paths 00:15:09.208 13:54:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:09.208 13:54:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.208 13:54:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:09.208 [2024-07-15 13:54:03.840683] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:09.208 13:54:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.208 13:54:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:09.774 13:54:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:15:10.341 13:54:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:15:10.341 13:54:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:15:10.341 13:54:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:10.341 13:54:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:10.341 13:54:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:15:12.245 13:54:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:12.245 13:54:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:12.245 13:54:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:12.245 13:54:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:12.245 13:54:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:12.245 13:54:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:15:12.245 13:54:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:12.503 [global] 00:15:12.503 thread=1 00:15:12.503 invalidate=1 00:15:12.503 rw=write 00:15:12.503 time_based=1 00:15:12.503 runtime=1 00:15:12.503 ioengine=libaio 00:15:12.503 direct=1 00:15:12.503 bs=4096 00:15:12.503 iodepth=1 00:15:12.503 norandommap=0 00:15:12.503 numjobs=1 00:15:12.503 00:15:12.503 verify_dump=1 00:15:12.503 verify_backlog=512 00:15:12.503 verify_state_save=0 00:15:12.503 do_verify=1 00:15:12.503 verify=crc32c-intel 00:15:12.503 [job0] 00:15:12.503 filename=/dev/nvme0n1 00:15:12.503 Could not set queue depth (nvme0n1) 00:15:12.503 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:12.503 fio-3.35 00:15:12.503 Starting 1 thread 00:15:13.875 00:15:13.875 job0: (groupid=0, jobs=1): err= 0: pid=3737727: Mon Jul 15 13:54:08 2024 00:15:13.875 read: IOPS=514, BW=2056KiB/s (2106kB/s)(2112KiB/1027msec) 00:15:13.875 slat (nsec): min=6667, max=45114, avg=9750.13, stdev=3758.22 
00:15:13.875 clat (usec): min=203, max=42034, avg=1496.76, stdev=6995.82 00:15:13.875 lat (usec): min=211, max=42053, avg=1506.51, stdev=6997.22 00:15:13.875 clat percentiles (usec): 00:15:13.875 | 1.00th=[ 217], 5.00th=[ 225], 10.00th=[ 229], 20.00th=[ 237], 00:15:13.875 | 30.00th=[ 243], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 265], 00:15:13.875 | 70.00th=[ 277], 80.00th=[ 289], 90.00th=[ 314], 95.00th=[ 330], 00:15:13.875 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:15:13.875 | 99.99th=[42206] 00:15:13.875 write: IOPS=997, BW=3988KiB/s (4084kB/s)(4096KiB/1027msec); 0 zone resets 00:15:13.876 slat (usec): min=7, max=28788, avg=41.14, stdev=899.25 00:15:13.876 clat (usec): min=131, max=718, avg=179.41, stdev=31.77 00:15:13.876 lat (usec): min=139, max=29006, avg=220.55, stdev=901.08 00:15:13.876 clat percentiles (usec): 00:15:13.876 | 1.00th=[ 139], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 157], 00:15:13.876 | 30.00th=[ 163], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 180], 00:15:13.876 | 70.00th=[ 188], 80.00th=[ 198], 90.00th=[ 221], 95.00th=[ 239], 00:15:13.876 | 99.00th=[ 253], 99.50th=[ 262], 99.90th=[ 306], 99.95th=[ 717], 00:15:13.876 | 99.99th=[ 717] 00:15:13.876 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:15:13.876 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:15:13.876 lat (usec) : 250=79.25%, 500=19.65%, 750=0.06% 00:15:13.876 lat (msec) : 50=1.03% 00:15:13.876 cpu : usr=1.07%, sys=2.53%, ctx=1555, majf=0, minf=2 00:15:13.876 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:13.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:13.876 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:13.876 issued rwts: total=528,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:13.876 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:13.876 00:15:13.876 Run status group 0 (all jobs): 00:15:13.876 READ: bw=2056KiB/s (2106kB/s), 2056KiB/s-2056KiB/s (2106kB/s-2106kB/s), io=2112KiB (2163kB), run=1027-1027msec 00:15:13.876 WRITE: bw=3988KiB/s (4084kB/s), 3988KiB/s-3988KiB/s (4084kB/s-4084kB/s), io=4096KiB (4194kB), run=1027-1027msec 00:15:13.876 00:15:13.876 Disk stats (read/write): 00:15:13.876 nvme0n1: ios=576/1024, merge=0/0, ticks=977/181, in_queue=1158, util=98.70% 00:15:13.876 13:54:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:13.876 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:13.876 13:54:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:13.876 13:54:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:15:13.876 13:54:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:13.876 13:54:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:13.876 13:54:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:13.876 13:54:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:13.876 13:54:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:15:13.876 13:54:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:15:13.876 13:54:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:15:13.876 13:54:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 
00:15:13.876 13:54:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:15:13.876 13:54:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:13.876 13:54:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:15:13.876 13:54:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:13.876 13:54:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:13.876 rmmod nvme_tcp 00:15:13.876 rmmod nvme_fabrics 00:15:13.876 rmmod nvme_keyring 00:15:13.876 13:54:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:13.876 13:54:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:15:13.876 13:54:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:15:13.876 13:54:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 3737136 ']' 00:15:13.876 13:54:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 3737136 00:15:13.876 13:54:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 3737136 ']' 00:15:13.876 13:54:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 3737136 00:15:13.876 13:54:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:15:13.876 13:54:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:13.876 13:54:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3737136 00:15:13.876 13:54:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:13.876 13:54:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:13.876 13:54:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3737136' 00:15:13.876 killing process with pid 3737136 00:15:13.876 13:54:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 3737136 00:15:13.876 13:54:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 3737136 00:15:14.442 13:54:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:14.442 13:54:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:14.442 13:54:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:14.442 13:54:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:14.442 13:54:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:14.442 13:54:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:14.442 13:54:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:14.442 13:54:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.346 13:54:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:16.346 00:15:16.346 real 0m9.981s 00:15:16.346 user 0m22.296s 00:15:16.346 sys 0m2.378s 00:15:16.346 13:54:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:16.346 13:54:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:16.346 ************************************ 00:15:16.346 END TEST nvmf_nmic 00:15:16.346 ************************************ 00:15:16.346 13:54:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:16.346 13:54:11 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:16.346 13:54:11 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:16.346 13:54:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:16.346 13:54:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:16.346 ************************************ 00:15:16.346 START TEST nvmf_fio_target 00:15:16.346 ************************************ 00:15:16.346 13:54:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:16.346 * Looking for test storage... 00:15:16.346 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:16.346 13:54:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:16.346 13:54:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:15:16.346 13:54:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:16.346 13:54:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:16.346 13:54:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:16.346 13:54:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:16.346 13:54:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:16.346 13:54:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:16.346 13:54:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:16.346 13:54:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:16.346 13:54:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:16.346 13:54:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:16.346 13:54:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:16.346 13:54:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:16.346 13:54:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:16.346 13:54:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:16.346 13:54:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:16.346 13:54:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:16.346 13:54:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:16.346 13:54:11 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:16.346 13:54:11 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:16.346 13:54:11 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:16.346 13:54:11 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.346 13:54:11 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.346 13:54:11 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.346 13:54:11 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:15:16.346 13:54:11 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.346 13:54:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:15:16.346 13:54:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:16.346 13:54:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:16.346 13:54:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:16.346 13:54:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:16.346 13:54:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:16.346 13:54:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:16.346 13:54:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:16.346 13:54:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:16.346 13:54:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:16.346 13:54:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:16.346 13:54:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:16.346 13:54:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:15:16.346 13:54:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:16.346 13:54:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:16.346 13:54:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:16.346 13:54:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:16.346 13:54:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:16.346 13:54:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.346 13:54:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:16.346 13:54:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.346 13:54:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:16.346 13:54:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:16.346 13:54:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:15:16.346 13:54:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.250 13:54:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:18.250 13:54:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:15:18.250 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:18.250 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:18.250 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:18.250 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:18.250 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:18.250 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:15:18.250 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:18.250 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:15:18.250 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:15:18.250 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:15:18.250 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:15:18.250 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:15:18.250 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:15:18.250 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:18.250 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:18.250 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:18.250 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:18.250 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:18.250 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:18.250 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:18.250 13:54:13 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:18.250 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:18.250 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:18.250 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:18.250 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:18.250 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:18.250 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:18.250 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:18.250 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:18.250 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:18.250 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:18.250 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:15:18.250 Found 0000:84:00.0 (0x8086 - 0x159b) 00:15:18.250 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:18.250 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:18.250 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:18.250 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:18.250 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:18.250 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:18.250 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:15:18.251 Found 0000:84:00.1 (0x8086 - 0x159b) 00:15:18.251 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:18.251 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:18.251 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:18.251 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:18.251 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:18.251 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:18.251 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:18.251 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:18.251 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:18.251 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:18.251 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:18.251 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:18.251 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:18.251 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:18.251 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:18.251 13:54:13 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:15:18.251 Found net devices under 0000:84:00.0: cvl_0_0 00:15:18.251 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:18.251 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:18.251 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:18.251 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:18.251 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:18.251 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:18.251 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:18.251 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:18.251 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:15:18.251 Found net devices under 0000:84:00.1: cvl_0_1 00:15:18.251 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:18.251 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:18.251 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:15:18.251 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:18.251 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:18.251 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:18.251 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:18.251 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:18.251 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:18.251 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:18.251 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:18.251 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:18.251 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:18.251 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:18.251 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:18.251 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:18.251 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:18.251 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:18.251 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:18.251 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:18.251 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:18.251 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:18.251 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:15:18.517 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:18.517 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:18.517 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:18.517 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:18.517 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:15:18.517 00:15:18.517 --- 10.0.0.2 ping statistics --- 00:15:18.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:18.517 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:15:18.517 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:18.518 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:18.518 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:15:18.518 00:15:18.518 --- 10.0.0.1 ping statistics --- 00:15:18.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:18.518 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:15:18.518 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:18.518 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:15:18.518 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:18.518 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:18.518 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:18.518 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:18.518 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:18.518 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:18.518 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:18.518 13:54:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:15:18.518 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:18.518 13:54:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:18.518 13:54:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.518 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=3740244 00:15:18.519 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:18.519 13:54:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 3740244 00:15:18.519 13:54:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 3740244 ']' 00:15:18.519 13:54:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:18.519 13:54:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:18.519 13:54:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:18.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:18.519 13:54:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:18.519 13:54:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.520 [2024-07-15 13:54:13.223076] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:15:18.520 [2024-07-15 13:54:13.223159] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:18.520 EAL: No free 2048 kB hugepages reported on node 1 00:15:18.520 [2024-07-15 13:54:13.297209] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:18.779 [2024-07-15 13:54:13.409344] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:18.779 [2024-07-15 13:54:13.409400] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:18.779 [2024-07-15 13:54:13.409428] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:18.779 [2024-07-15 13:54:13.409439] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:18.779 [2024-07-15 13:54:13.409449] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:18.780 [2024-07-15 13:54:13.409525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:18.780 [2024-07-15 13:54:13.409580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:18.780 [2024-07-15 13:54:13.409650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.780 [2024-07-15 13:54:13.409647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:19.346 13:54:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:19.346 13:54:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:15:19.346 13:54:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:19.346 13:54:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:19.346 13:54:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.346 13:54:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:19.346 13:54:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:19.644 [2024-07-15 13:54:14.410522] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:19.644 13:54:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:19.933 13:54:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:15:19.933 13:54:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:20.190 13:54:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:15:20.190 13:54:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:20.447 13:54:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
00:15:20.447 13:54:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:20.704 13:54:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:15:20.704 13:54:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:15:20.960 13:54:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:21.217 13:54:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:15:21.217 13:54:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:21.783 13:54:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:15:21.783 13:54:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:21.783 13:54:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:15:21.783 13:54:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:15:22.040 13:54:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:22.296 13:54:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:22.296 13:54:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:22.553 13:54:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:22.553 13:54:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:22.811 13:54:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:23.069 [2024-07-15 13:54:17.813407] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:23.069 13:54:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:15:23.327 13:54:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:15:23.584 13:54:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:24.151 13:54:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:15:24.151 13:54:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:15:24.151 13:54:18 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:24.151 13:54:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:15:24.151 13:54:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:15:24.151 13:54:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:15:26.683 13:54:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:26.683 13:54:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:26.683 13:54:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:26.683 13:54:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:15:26.683 13:54:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:26.683 13:54:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:15:26.683 13:54:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:26.683 [global] 00:15:26.683 thread=1 00:15:26.683 invalidate=1 00:15:26.683 rw=write 00:15:26.683 time_based=1 00:15:26.683 runtime=1 00:15:26.683 ioengine=libaio 00:15:26.683 direct=1 00:15:26.683 bs=4096 00:15:26.683 iodepth=1 00:15:26.683 norandommap=0 00:15:26.683 numjobs=1 00:15:26.683 00:15:26.683 verify_dump=1 00:15:26.683 verify_backlog=512 00:15:26.683 verify_state_save=0 00:15:26.683 do_verify=1 00:15:26.683 verify=crc32c-intel 00:15:26.683 [job0] 00:15:26.683 filename=/dev/nvme0n1 00:15:26.683 [job1] 00:15:26.683 filename=/dev/nvme0n2 00:15:26.683 [job2] 00:15:26.683 filename=/dev/nvme0n3 00:15:26.683 [job3] 00:15:26.683 filename=/dev/nvme0n4 00:15:26.683 Could not set queue depth (nvme0n1) 00:15:26.683 Could not set queue depth (nvme0n2) 00:15:26.683 Could not set queue depth (nvme0n3) 00:15:26.683 Could not set queue depth (nvme0n4) 00:15:26.683 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:26.683 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:26.683 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:26.683 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:26.683 fio-3.35 00:15:26.683 Starting 4 threads 00:15:27.620 00:15:27.620 job0: (groupid=0, jobs=1): err= 0: pid=3741332: Mon Jul 15 13:54:22 2024 00:15:27.620 read: IOPS=1021, BW=4088KiB/s (4186kB/s)(4100KiB/1003msec) 00:15:27.620 slat (nsec): min=4923, max=60371, avg=13197.33, stdev=6587.44 00:15:27.620 clat (usec): min=207, max=41171, avg=607.45, stdev=3578.20 00:15:27.620 lat (usec): min=214, max=41187, avg=620.65, stdev=3578.93 00:15:27.620 clat percentiles (usec): 00:15:27.620 | 1.00th=[ 217], 5.00th=[ 223], 10.00th=[ 231], 20.00th=[ 253], 00:15:27.620 | 30.00th=[ 277], 40.00th=[ 289], 50.00th=[ 293], 60.00th=[ 302], 00:15:27.620 | 70.00th=[ 310], 80.00th=[ 318], 90.00th=[ 334], 95.00th=[ 351], 00:15:27.620 | 99.00th=[ 441], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:15:27.620 | 99.99th=[41157] 00:15:27.620 write: IOPS=1531, BW=6126KiB/s (6273kB/s)(6144KiB/1003msec); 0 zone resets 00:15:27.620 slat (nsec): min=7896, max=94137, avg=18266.14, stdev=7919.51 00:15:27.620 clat 
(usec): min=145, max=413, avg=212.88, stdev=40.42 00:15:27.620 lat (usec): min=161, max=436, avg=231.14, stdev=40.00 00:15:27.620 clat percentiles (usec): 00:15:27.620 | 1.00th=[ 155], 5.00th=[ 163], 10.00th=[ 169], 20.00th=[ 184], 00:15:27.620 | 30.00th=[ 190], 40.00th=[ 196], 50.00th=[ 204], 60.00th=[ 210], 00:15:27.620 | 70.00th=[ 219], 80.00th=[ 239], 90.00th=[ 281], 95.00th=[ 297], 00:15:27.620 | 99.00th=[ 326], 99.50th=[ 334], 99.90th=[ 363], 99.95th=[ 412], 00:15:27.620 | 99.99th=[ 412] 00:15:27.620 bw ( KiB/s): min= 4096, max= 8192, per=43.71%, avg=6144.00, stdev=2896.31, samples=2 00:15:27.620 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:15:27.620 lat (usec) : 250=56.58%, 500=43.07%, 750=0.04% 00:15:27.620 lat (msec) : 50=0.31% 00:15:27.620 cpu : usr=2.50%, sys=4.69%, ctx=2561, majf=0, minf=1 00:15:27.620 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:27.620 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:27.620 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:27.620 issued rwts: total=1025,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:27.620 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:27.620 job1: (groupid=0, jobs=1): err= 0: pid=3741333: Mon Jul 15 13:54:22 2024 00:15:27.620 read: IOPS=21, BW=87.3KiB/s (89.4kB/s)(88.0KiB/1008msec) 00:15:27.620 slat (nsec): min=7018, max=43276, avg=26270.59, stdev=10700.23 00:15:27.620 clat (usec): min=322, max=41111, avg=39098.07, stdev=8661.36 00:15:27.620 lat (usec): min=335, max=41125, avg=39124.34, stdev=8664.18 00:15:27.620 clat percentiles (usec): 00:15:27.620 | 1.00th=[ 322], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:15:27.620 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:27.620 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:15:27.620 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:15:27.620 | 99.99th=[41157] 00:15:27.620 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:15:27.620 slat (nsec): min=6618, max=27148, avg=10641.46, stdev=4043.27 00:15:27.620 clat (usec): min=139, max=459, avg=272.04, stdev=74.84 00:15:27.620 lat (usec): min=148, max=472, avg=282.68, stdev=75.77 00:15:27.620 clat percentiles (usec): 00:15:27.620 | 1.00th=[ 155], 5.00th=[ 190], 10.00th=[ 202], 20.00th=[ 215], 00:15:27.620 | 30.00th=[ 225], 40.00th=[ 233], 50.00th=[ 245], 60.00th=[ 258], 00:15:27.620 | 70.00th=[ 302], 80.00th=[ 334], 90.00th=[ 416], 95.00th=[ 424], 00:15:27.620 | 99.00th=[ 437], 99.50th=[ 441], 99.90th=[ 461], 99.95th=[ 461], 00:15:27.620 | 99.99th=[ 461] 00:15:27.620 bw ( KiB/s): min= 4096, max= 4096, per=29.14%, avg=4096.00, stdev= 0.00, samples=1 00:15:27.620 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:27.620 lat (usec) : 250=54.68%, 500=41.39% 00:15:27.620 lat (msec) : 50=3.93% 00:15:27.620 cpu : usr=0.40%, sys=0.40%, ctx=536, majf=0, minf=2 00:15:27.620 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:27.620 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:27.621 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:27.621 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:27.621 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:27.621 job2: (groupid=0, jobs=1): err= 0: pid=3741334: Mon Jul 15 13:54:22 2024 00:15:27.621 read: IOPS=516, BW=2067KiB/s 
(2116kB/s)(2108KiB/1020msec) 00:15:27.621 slat (nsec): min=4934, max=46143, avg=8333.63, stdev=4992.56 00:15:27.621 clat (usec): min=208, max=41370, avg=1504.96, stdev=6993.66 00:15:27.621 lat (usec): min=213, max=41379, avg=1513.29, stdev=6997.03 00:15:27.621 clat percentiles (usec): 00:15:27.621 | 1.00th=[ 219], 5.00th=[ 229], 10.00th=[ 237], 20.00th=[ 243], 00:15:27.621 | 30.00th=[ 247], 40.00th=[ 251], 50.00th=[ 258], 60.00th=[ 262], 00:15:27.621 | 70.00th=[ 273], 80.00th=[ 293], 90.00th=[ 326], 95.00th=[ 424], 00:15:27.621 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:15:27.621 | 99.99th=[41157] 00:15:27.621 write: IOPS=1003, BW=4016KiB/s (4112kB/s)(4096KiB/1020msec); 0 zone resets 00:15:27.621 slat (usec): min=6, max=677, avg=13.44, stdev=21.39 00:15:27.621 clat (usec): min=158, max=397, avg=199.17, stdev=26.31 00:15:27.621 lat (usec): min=169, max=874, avg=212.61, stdev=36.24 00:15:27.621 clat percentiles (usec): 00:15:27.621 | 1.00th=[ 165], 5.00th=[ 169], 10.00th=[ 172], 20.00th=[ 178], 00:15:27.621 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 194], 60.00th=[ 204], 00:15:27.621 | 70.00th=[ 215], 80.00th=[ 221], 90.00th=[ 231], 95.00th=[ 241], 00:15:27.621 | 99.00th=[ 273], 99.50th=[ 302], 99.90th=[ 375], 99.95th=[ 400], 00:15:27.621 | 99.99th=[ 400] 00:15:27.621 bw ( KiB/s): min= 8192, max= 8192, per=58.29%, avg=8192.00, stdev= 0.00, samples=1 00:15:27.621 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:15:27.621 lat (usec) : 250=77.43%, 500=21.34%, 750=0.19% 00:15:27.621 lat (msec) : 50=1.03% 00:15:27.621 cpu : usr=1.18%, sys=1.47%, ctx=1551, majf=0, minf=1 00:15:27.621 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:27.621 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:27.621 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:27.621 issued rwts: total=527,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:27.621 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:27.621 job3: (groupid=0, jobs=1): err= 0: pid=3741335: Mon Jul 15 13:54:22 2024 00:15:27.621 read: IOPS=20, BW=83.7KiB/s (85.8kB/s)(84.0KiB/1003msec) 00:15:27.621 slat (nsec): min=8551, max=41632, avg=28249.05, stdev=10473.38 00:15:27.621 clat (usec): min=40850, max=41195, avg=40972.24, stdev=89.40 00:15:27.621 lat (usec): min=40885, max=41211, avg=41000.49, stdev=83.49 00:15:27.621 clat percentiles (usec): 00:15:27.621 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:15:27.621 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:27.621 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:15:27.621 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:15:27.621 | 99.99th=[41157] 00:15:27.621 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:15:27.621 slat (nsec): min=8326, max=27465, avg=11368.12, stdev=3317.11 00:15:27.621 clat (usec): min=164, max=415, avg=261.89, stdev=58.07 00:15:27.621 lat (usec): min=173, max=431, avg=273.26, stdev=59.33 00:15:27.621 clat percentiles (usec): 00:15:27.621 | 1.00th=[ 167], 5.00th=[ 180], 10.00th=[ 196], 20.00th=[ 212], 00:15:27.621 | 30.00th=[ 221], 40.00th=[ 229], 50.00th=[ 245], 60.00th=[ 273], 00:15:27.621 | 70.00th=[ 289], 80.00th=[ 322], 90.00th=[ 343], 95.00th=[ 363], 00:15:27.621 | 99.00th=[ 396], 99.50th=[ 404], 99.90th=[ 416], 99.95th=[ 416], 00:15:27.621 | 99.99th=[ 416] 00:15:27.621 bw ( KiB/s): min= 4096, max= 4096, 
per=29.14%, avg=4096.00, stdev= 0.00, samples=1 00:15:27.621 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:27.621 lat (usec) : 250=48.59%, 500=47.47% 00:15:27.621 lat (msec) : 50=3.94% 00:15:27.621 cpu : usr=0.30%, sys=0.80%, ctx=535, majf=0, minf=1 00:15:27.621 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:27.621 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:27.621 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:27.621 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:27.621 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:27.621 00:15:27.621 Run status group 0 (all jobs): 00:15:27.621 READ: bw=6255KiB/s (6405kB/s), 83.7KiB/s-4088KiB/s (85.8kB/s-4186kB/s), io=6380KiB (6533kB), run=1003-1020msec 00:15:27.621 WRITE: bw=13.7MiB/s (14.4MB/s), 2032KiB/s-6126KiB/s (2081kB/s-6273kB/s), io=14.0MiB (14.7MB), run=1003-1020msec 00:15:27.621 00:15:27.621 Disk stats (read/write): 00:15:27.621 nvme0n1: ios=1074/1039, merge=0/0, ticks=600/217, in_queue=817, util=86.97% 00:15:27.621 nvme0n2: ios=55/512, merge=0/0, ticks=1618/136, in_queue=1754, util=89.32% 00:15:27.621 nvme0n3: ios=579/1024, merge=0/0, ticks=656/198, in_queue=854, util=94.98% 00:15:27.621 nvme0n4: ios=76/512, merge=0/0, ticks=1271/128, in_queue=1399, util=95.89% 00:15:27.621 13:54:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:15:27.621 [global] 00:15:27.621 thread=1 00:15:27.621 invalidate=1 00:15:27.621 rw=randwrite 00:15:27.621 time_based=1 00:15:27.621 runtime=1 00:15:27.621 ioengine=libaio 00:15:27.621 direct=1 00:15:27.621 bs=4096 00:15:27.621 iodepth=1 00:15:27.621 norandommap=0 00:15:27.621 numjobs=1 00:15:27.621 00:15:27.621 verify_dump=1 00:15:27.621 verify_backlog=512 00:15:27.621 verify_state_save=0 00:15:27.621 do_verify=1 00:15:27.621 verify=crc32c-intel 00:15:27.621 [job0] 00:15:27.621 filename=/dev/nvme0n1 00:15:27.621 [job1] 00:15:27.621 filename=/dev/nvme0n2 00:15:27.621 [job2] 00:15:27.621 filename=/dev/nvme0n3 00:15:27.621 [job3] 00:15:27.621 filename=/dev/nvme0n4 00:15:27.621 Could not set queue depth (nvme0n1) 00:15:27.621 Could not set queue depth (nvme0n2) 00:15:27.621 Could not set queue depth (nvme0n3) 00:15:27.621 Could not set queue depth (nvme0n4) 00:15:27.879 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:27.879 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:27.879 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:27.879 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:27.879 fio-3.35 00:15:27.879 Starting 4 threads 00:15:29.251 00:15:29.251 job0: (groupid=0, jobs=1): err= 0: pid=3741683: Mon Jul 15 13:54:23 2024 00:15:29.252 read: IOPS=1600, BW=6402KiB/s (6555kB/s)(6408KiB/1001msec) 00:15:29.252 slat (nsec): min=7128, max=51953, avg=12746.42, stdev=5725.84 00:15:29.252 clat (usec): min=205, max=1391, avg=306.99, stdev=66.58 00:15:29.252 lat (usec): min=213, max=1399, avg=319.74, stdev=70.10 00:15:29.252 clat percentiles (usec): 00:15:29.252 | 1.00th=[ 217], 5.00th=[ 225], 10.00th=[ 231], 20.00th=[ 245], 00:15:29.252 | 30.00th=[ 262], 40.00th=[ 285], 50.00th=[ 306], 60.00th=[ 
326], 00:15:29.252 | 70.00th=[ 347], 80.00th=[ 359], 90.00th=[ 379], 95.00th=[ 396], 00:15:29.252 | 99.00th=[ 490], 99.50th=[ 498], 99.90th=[ 717], 99.95th=[ 1385], 00:15:29.252 | 99.99th=[ 1385] 00:15:29.252 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:15:29.252 slat (nsec): min=7789, max=72052, avg=16768.77, stdev=8194.07 00:15:29.252 clat (usec): min=140, max=950, avg=213.41, stdev=43.64 00:15:29.252 lat (usec): min=149, max=969, avg=230.18, stdev=48.23 00:15:29.252 clat percentiles (usec): 00:15:29.252 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 159], 20.00th=[ 172], 00:15:29.252 | 30.00th=[ 192], 40.00th=[ 212], 50.00th=[ 221], 60.00th=[ 227], 00:15:29.252 | 70.00th=[ 233], 80.00th=[ 241], 90.00th=[ 253], 95.00th=[ 265], 00:15:29.252 | 99.00th=[ 326], 99.50th=[ 363], 99.90th=[ 457], 99.95th=[ 824], 00:15:29.252 | 99.99th=[ 955] 00:15:29.252 bw ( KiB/s): min= 8192, max= 8192, per=58.69%, avg=8192.00, stdev= 0.00, samples=1 00:15:29.252 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:15:29.252 lat (usec) : 250=60.03%, 500=39.73%, 750=0.16%, 1000=0.05% 00:15:29.252 lat (msec) : 2=0.03% 00:15:29.252 cpu : usr=4.00%, sys=7.20%, ctx=3651, majf=0, minf=1 00:15:29.252 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:29.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:29.252 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:29.252 issued rwts: total=1602,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:29.252 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:29.252 job1: (groupid=0, jobs=1): err= 0: pid=3741684: Mon Jul 15 13:54:23 2024 00:15:29.252 read: IOPS=21, BW=87.6KiB/s (89.8kB/s)(88.0KiB/1004msec) 00:15:29.252 slat (nsec): min=8799, max=36920, avg=20982.77, stdev=9011.11 00:15:29.252 clat (usec): min=40852, max=42001, avg=41019.23, stdev=228.64 00:15:29.252 lat (usec): min=40880, max=42016, avg=41040.21, stdev=226.11 00:15:29.252 clat percentiles (usec): 00:15:29.252 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:15:29.252 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:29.252 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:15:29.252 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:29.252 | 99.99th=[42206] 00:15:29.252 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:15:29.252 slat (nsec): min=7304, max=38808, avg=10855.05, stdev=4973.95 00:15:29.252 clat (usec): min=151, max=4143, avg=180.47, stdev=176.05 00:15:29.252 lat (usec): min=160, max=4154, avg=191.32, stdev=176.14 00:15:29.252 clat percentiles (usec): 00:15:29.252 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 159], 20.00th=[ 163], 00:15:29.252 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 172], 60.00th=[ 174], 00:15:29.252 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 186], 95.00th=[ 194], 00:15:29.252 | 99.00th=[ 235], 99.50th=[ 281], 99.90th=[ 4146], 99.95th=[ 4146], 00:15:29.252 | 99.99th=[ 4146] 00:15:29.252 bw ( KiB/s): min= 4096, max= 4096, per=29.34%, avg=4096.00, stdev= 0.00, samples=1 00:15:29.252 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:29.252 lat (usec) : 250=94.94%, 500=0.75% 00:15:29.252 lat (msec) : 10=0.19%, 50=4.12% 00:15:29.252 cpu : usr=0.40%, sys=0.30%, ctx=535, majf=0, minf=1 00:15:29.252 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:29.252 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:29.252 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:29.252 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:29.252 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:29.252 job2: (groupid=0, jobs=1): err= 0: pid=3741685: Mon Jul 15 13:54:23 2024 00:15:29.252 read: IOPS=26, BW=105KiB/s (108kB/s)(108KiB/1027msec) 00:15:29.252 slat (nsec): min=7443, max=49287, avg=24789.96, stdev=12721.15 00:15:29.252 clat (usec): min=321, max=41960, avg=33506.25, stdev=16095.35 00:15:29.252 lat (usec): min=340, max=41977, avg=33531.04, stdev=16097.54 00:15:29.252 clat percentiles (usec): 00:15:29.252 | 1.00th=[ 322], 5.00th=[ 371], 10.00th=[ 375], 20.00th=[40633], 00:15:29.252 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:29.252 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:15:29.252 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:29.252 | 99.99th=[42206] 00:15:29.252 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:15:29.252 slat (nsec): min=7338, max=34052, avg=8699.30, stdev=1740.20 00:15:29.252 clat (usec): min=157, max=478, avg=221.46, stdev=21.23 00:15:29.252 lat (usec): min=165, max=512, avg=230.16, stdev=21.80 00:15:29.252 clat percentiles (usec): 00:15:29.252 | 1.00th=[ 167], 5.00th=[ 188], 10.00th=[ 202], 20.00th=[ 210], 00:15:29.252 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 223], 60.00th=[ 227], 00:15:29.252 | 70.00th=[ 231], 80.00th=[ 235], 90.00th=[ 241], 95.00th=[ 245], 00:15:29.252 | 99.00th=[ 258], 99.50th=[ 265], 99.90th=[ 478], 99.95th=[ 478], 00:15:29.252 | 99.99th=[ 478] 00:15:29.252 bw ( KiB/s): min= 4096, max= 4096, per=29.34%, avg=4096.00, stdev= 0.00, samples=1 00:15:29.252 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:29.252 lat (usec) : 250=92.76%, 500=3.15% 00:15:29.252 lat (msec) : 50=4.08% 00:15:29.252 cpu : usr=0.29%, sys=0.39%, ctx=541, majf=0, minf=1 00:15:29.252 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:29.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:29.252 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:29.252 issued rwts: total=27,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:29.252 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:29.252 job3: (groupid=0, jobs=1): err= 0: pid=3741686: Mon Jul 15 13:54:23 2024 00:15:29.252 read: IOPS=483, BW=1934KiB/s (1980kB/s)(1936KiB/1001msec) 00:15:29.252 slat (nsec): min=7320, max=62259, avg=15154.51, stdev=7244.98 00:15:29.252 clat (usec): min=251, max=41692, avg=1822.42, stdev=7519.58 00:15:29.252 lat (usec): min=261, max=41700, avg=1837.58, stdev=7521.21 00:15:29.252 clat percentiles (usec): 00:15:29.252 | 1.00th=[ 260], 5.00th=[ 281], 10.00th=[ 289], 20.00th=[ 302], 00:15:29.252 | 30.00th=[ 314], 40.00th=[ 330], 50.00th=[ 347], 60.00th=[ 375], 00:15:29.252 | 70.00th=[ 400], 80.00th=[ 416], 90.00th=[ 445], 95.00th=[ 515], 00:15:29.252 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:15:29.252 | 99.99th=[41681] 00:15:29.252 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:15:29.252 slat (nsec): min=6710, max=36660, avg=11919.46, stdev=5603.70 00:15:29.252 clat (usec): min=153, max=294, avg=193.38, stdev=17.97 00:15:29.252 lat (usec): min=161, max=320, avg=205.30, stdev=19.60 00:15:29.252 clat percentiles (usec): 
00:15:29.252 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 180], 00:15:29.252 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 196], 00:15:29.252 | 70.00th=[ 202], 80.00th=[ 208], 90.00th=[ 217], 95.00th=[ 225], 00:15:29.252 | 99.00th=[ 239], 99.50th=[ 249], 99.90th=[ 293], 99.95th=[ 293], 00:15:29.252 | 99.99th=[ 293] 00:15:29.252 bw ( KiB/s): min= 4096, max= 4096, per=29.34%, avg=4096.00, stdev= 0.00, samples=1 00:15:29.252 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:29.252 lat (usec) : 250=51.20%, 500=46.18%, 750=0.20%, 1000=0.40% 00:15:29.252 lat (msec) : 2=0.20%, 20=0.10%, 50=1.71% 00:15:29.252 cpu : usr=0.60%, sys=1.80%, ctx=998, majf=0, minf=2 00:15:29.252 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:29.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:29.252 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:29.252 issued rwts: total=484,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:29.252 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:29.252 00:15:29.252 Run status group 0 (all jobs): 00:15:29.252 READ: bw=8315KiB/s (8515kB/s), 87.6KiB/s-6402KiB/s (89.8kB/s-6555kB/s), io=8540KiB (8745kB), run=1001-1027msec 00:15:29.252 WRITE: bw=13.6MiB/s (14.3MB/s), 1994KiB/s-8184KiB/s (2042kB/s-8380kB/s), io=14.0MiB (14.7MB), run=1001-1027msec 00:15:29.252 00:15:29.252 Disk stats (read/write): 00:15:29.252 nvme0n1: ios=1538/1536, merge=0/0, ticks=1474/326, in_queue=1800, util=97.70% 00:15:29.252 nvme0n2: ios=42/512, merge=0/0, ticks=1729/94, in_queue=1823, util=97.26% 00:15:29.252 nvme0n3: ios=41/512, merge=0/0, ticks=1642/109, in_queue=1751, util=97.18% 00:15:29.252 nvme0n4: ios=322/512, merge=0/0, ticks=1534/92, in_queue=1626, util=97.16% 00:15:29.252 13:54:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:15:29.252 [global] 00:15:29.252 thread=1 00:15:29.252 invalidate=1 00:15:29.252 rw=write 00:15:29.252 time_based=1 00:15:29.252 runtime=1 00:15:29.252 ioengine=libaio 00:15:29.252 direct=1 00:15:29.252 bs=4096 00:15:29.252 iodepth=128 00:15:29.252 norandommap=0 00:15:29.252 numjobs=1 00:15:29.252 00:15:29.252 verify_dump=1 00:15:29.252 verify_backlog=512 00:15:29.252 verify_state_save=0 00:15:29.252 do_verify=1 00:15:29.252 verify=crc32c-intel 00:15:29.252 [job0] 00:15:29.252 filename=/dev/nvme0n1 00:15:29.252 [job1] 00:15:29.252 filename=/dev/nvme0n2 00:15:29.252 [job2] 00:15:29.252 filename=/dev/nvme0n3 00:15:29.252 [job3] 00:15:29.252 filename=/dev/nvme0n4 00:15:29.252 Could not set queue depth (nvme0n1) 00:15:29.252 Could not set queue depth (nvme0n2) 00:15:29.252 Could not set queue depth (nvme0n3) 00:15:29.252 Could not set queue depth (nvme0n4) 00:15:29.252 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:29.252 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:29.252 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:29.252 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:29.252 fio-3.35 00:15:29.252 Starting 4 threads 00:15:30.627 00:15:30.627 job0: (groupid=0, jobs=1): err= 0: pid=3741912: Mon Jul 15 13:54:25 2024 00:15:30.627 read: IOPS=5069, BW=19.8MiB/s 
(20.8MB/s)(20.0MiB/1010msec) 00:15:30.627 slat (usec): min=2, max=16063, avg=98.91, stdev=703.54 00:15:30.627 clat (usec): min=3376, max=32754, avg=12733.16, stdev=4085.97 00:15:30.627 lat (usec): min=3461, max=36460, avg=12832.06, stdev=4133.55 00:15:30.627 clat percentiles (usec): 00:15:30.627 | 1.00th=[ 4424], 5.00th=[ 7701], 10.00th=[ 8586], 20.00th=[10028], 00:15:30.627 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11863], 60.00th=[12780], 00:15:30.627 | 70.00th=[13566], 80.00th=[15795], 90.00th=[17695], 95.00th=[20841], 00:15:30.627 | 99.00th=[24249], 99.50th=[27395], 99.90th=[29230], 99.95th=[30540], 00:15:30.627 | 99.99th=[32637] 00:15:30.627 write: IOPS=5127, BW=20.0MiB/s (21.0MB/s)(20.2MiB/1010msec); 0 zone resets 00:15:30.627 slat (usec): min=3, max=14901, avg=86.47, stdev=542.73 00:15:30.627 clat (usec): min=396, max=29111, avg=12091.14, stdev=4185.87 00:15:30.627 lat (usec): min=428, max=29141, avg=12177.61, stdev=4213.89 00:15:30.627 clat percentiles (usec): 00:15:30.627 | 1.00th=[ 4883], 5.00th=[ 5735], 10.00th=[ 8029], 20.00th=[ 8848], 00:15:30.627 | 30.00th=[ 9896], 40.00th=[10683], 50.00th=[11207], 60.00th=[11994], 00:15:30.627 | 70.00th=[12649], 80.00th=[15401], 90.00th=[18744], 95.00th=[20055], 00:15:30.627 | 99.00th=[25035], 99.50th=[26870], 99.90th=[27919], 99.95th=[27919], 00:15:30.627 | 99.99th=[29230] 00:15:30.627 bw ( KiB/s): min=18240, max=22720, per=31.37%, avg=20480.00, stdev=3167.84, samples=2 00:15:30.627 iops : min= 4560, max= 5680, avg=5120.00, stdev=791.96, samples=2 00:15:30.627 lat (usec) : 500=0.01%, 1000=0.02% 00:15:30.627 lat (msec) : 2=0.15%, 4=0.25%, 10=25.13%, 20=67.31%, 50=7.14% 00:15:30.627 cpu : usr=4.16%, sys=7.23%, ctx=540, majf=0, minf=1 00:15:30.627 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:15:30.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:30.627 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:30.627 issued rwts: total=5120,5179,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:30.627 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:30.627 job1: (groupid=0, jobs=1): err= 0: pid=3741913: Mon Jul 15 13:54:25 2024 00:15:30.627 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:15:30.627 slat (usec): min=2, max=12408, avg=108.14, stdev=663.88 00:15:30.627 clat (usec): min=1640, max=48162, avg=13630.20, stdev=5095.95 00:15:30.627 lat (usec): min=1645, max=48205, avg=13738.34, stdev=5150.30 00:15:30.627 clat percentiles (usec): 00:15:30.627 | 1.00th=[ 2737], 5.00th=[ 6652], 10.00th=[ 9372], 20.00th=[10683], 00:15:30.627 | 30.00th=[11469], 40.00th=[11863], 50.00th=[12780], 60.00th=[13698], 00:15:30.627 | 70.00th=[14877], 80.00th=[16581], 90.00th=[17695], 95.00th=[21890], 00:15:30.627 | 99.00th=[35914], 99.50th=[45876], 99.90th=[45876], 99.95th=[45876], 00:15:30.627 | 99.99th=[47973] 00:15:30.627 write: IOPS=3734, BW=14.6MiB/s (15.3MB/s)(14.7MiB/1006msec); 0 zone resets 00:15:30.627 slat (usec): min=3, max=33986, avg=150.93, stdev=1167.28 00:15:30.627 clat (usec): min=813, max=104226, avg=20172.23, stdev=19641.04 00:15:30.627 lat (usec): min=829, max=104249, avg=20323.17, stdev=19774.57 00:15:30.627 clat percentiles (msec): 00:15:30.627 | 1.00th=[ 3], 5.00th=[ 7], 10.00th=[ 9], 20.00th=[ 11], 00:15:30.627 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 14], 00:15:30.627 | 70.00th=[ 16], 80.00th=[ 24], 90.00th=[ 44], 95.00th=[ 71], 00:15:30.627 | 99.00th=[ 100], 99.50th=[ 101], 99.90th=[ 102], 99.95th=[ 102], 00:15:30.627 
| 99.99th=[ 105] 00:15:30.627 bw ( KiB/s): min= 9168, max=19872, per=22.24%, avg=14520.00, stdev=7568.87, samples=2 00:15:30.627 iops : min= 2292, max= 4968, avg=3630.00, stdev=1892.22, samples=2 00:15:30.627 lat (usec) : 1000=0.04% 00:15:30.627 lat (msec) : 2=0.45%, 4=1.65%, 10=14.70%, 20=66.15%, 50=12.18% 00:15:30.627 lat (msec) : 100=4.44%, 250=0.40% 00:15:30.627 cpu : usr=4.38%, sys=5.87%, ctx=401, majf=0, minf=1 00:15:30.627 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:15:30.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:30.627 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:30.627 issued rwts: total=3584,3757,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:30.627 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:30.627 job2: (groupid=0, jobs=1): err= 0: pid=3741914: Mon Jul 15 13:54:25 2024 00:15:30.627 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:15:30.627 slat (usec): min=2, max=17198, avg=118.41, stdev=845.91 00:15:30.627 clat (usec): min=1880, max=60528, avg=17374.58, stdev=8378.31 00:15:30.627 lat (usec): min=1891, max=60532, avg=17492.99, stdev=8425.84 00:15:30.627 clat percentiles (usec): 00:15:30.627 | 1.00th=[ 1909], 5.00th=[ 9765], 10.00th=[11076], 20.00th=[12649], 00:15:30.627 | 30.00th=[13435], 40.00th=[13960], 50.00th=[14484], 60.00th=[15270], 00:15:30.627 | 70.00th=[18744], 80.00th=[24249], 90.00th=[27657], 95.00th=[35390], 00:15:30.627 | 99.00th=[45876], 99.50th=[46400], 99.90th=[60556], 99.95th=[60556], 00:15:30.627 | 99.99th=[60556] 00:15:30.627 write: IOPS=3695, BW=14.4MiB/s (15.1MB/s)(14.5MiB/1003msec); 0 zone resets 00:15:30.627 slat (usec): min=3, max=19939, avg=109.41, stdev=782.63 00:15:30.627 clat (usec): min=705, max=78135, avg=17568.57, stdev=8078.65 00:15:30.627 lat (usec): min=1099, max=78141, avg=17677.97, stdev=8119.01 00:15:30.627 clat percentiles (usec): 00:15:30.627 | 1.00th=[ 4490], 5.00th=[ 7177], 10.00th=[ 9372], 20.00th=[12256], 00:15:30.627 | 30.00th=[13304], 40.00th=[13698], 50.00th=[14484], 60.00th=[17171], 00:15:30.627 | 70.00th=[20055], 80.00th=[22414], 90.00th=[28967], 95.00th=[32375], 00:15:30.627 | 99.00th=[36439], 99.50th=[65799], 99.90th=[68682], 99.95th=[78119], 00:15:30.627 | 99.99th=[78119] 00:15:30.627 bw ( KiB/s): min=12720, max=15960, per=21.96%, avg=14340.00, stdev=2291.03, samples=2 00:15:30.627 iops : min= 3180, max= 3990, avg=3585.00, stdev=572.76, samples=2 00:15:30.627 lat (usec) : 750=0.01% 00:15:30.627 lat (msec) : 2=1.22%, 4=0.86%, 10=6.99%, 20=62.63%, 50=27.71% 00:15:30.627 lat (msec) : 100=0.58% 00:15:30.627 cpu : usr=3.89%, sys=6.49%, ctx=265, majf=0, minf=1 00:15:30.627 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:15:30.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:30.627 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:30.627 issued rwts: total=3584,3707,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:30.627 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:30.627 job3: (groupid=0, jobs=1): err= 0: pid=3741915: Mon Jul 15 13:54:25 2024 00:15:30.627 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:15:30.627 slat (usec): min=3, max=24737, avg=137.89, stdev=981.73 00:15:30.627 clat (usec): min=1149, max=55267, avg=17030.68, stdev=9297.31 00:15:30.627 lat (usec): min=1158, max=55282, avg=17168.57, stdev=9355.96 00:15:30.627 clat percentiles (usec): 00:15:30.627 | 1.00th=[ 3818], 
5.00th=[ 6915], 10.00th=[10552], 20.00th=[12125], 00:15:30.627 | 30.00th=[12649], 40.00th=[13304], 50.00th=[13829], 60.00th=[14746], 00:15:30.627 | 70.00th=[16450], 80.00th=[21627], 90.00th=[30540], 95.00th=[38011], 00:15:30.627 | 99.00th=[50594], 99.50th=[53216], 99.90th=[55313], 99.95th=[55313], 00:15:30.627 | 99.99th=[55313] 00:15:30.627 write: IOPS=3823, BW=14.9MiB/s (15.7MB/s)(15.0MiB/1005msec); 0 zone resets 00:15:30.627 slat (usec): min=5, max=15373, avg=120.10, stdev=674.32 00:15:30.627 clat (usec): min=1669, max=55285, avg=17272.21, stdev=6728.90 00:15:30.627 lat (usec): min=1716, max=55308, avg=17392.31, stdev=6782.05 00:15:30.627 clat percentiles (usec): 00:15:30.627 | 1.00th=[ 4621], 5.00th=[ 8586], 10.00th=[10683], 20.00th=[12649], 00:15:30.627 | 30.00th=[13173], 40.00th=[13566], 50.00th=[14484], 60.00th=[16909], 00:15:30.627 | 70.00th=[21103], 80.00th=[22938], 90.00th=[26608], 95.00th=[30802], 00:15:30.627 | 99.00th=[34341], 99.50th=[34866], 99.90th=[45351], 99.95th=[55313], 00:15:30.627 | 99.99th=[55313] 00:15:30.627 bw ( KiB/s): min=12288, max=17440, per=22.77%, avg=14864.00, stdev=3643.01, samples=2 00:15:30.627 iops : min= 3072, max= 4360, avg=3716.00, stdev=910.75, samples=2 00:15:30.627 lat (msec) : 2=0.11%, 4=1.10%, 10=6.72%, 20=64.62%, 50=26.92% 00:15:30.627 lat (msec) : 100=0.54% 00:15:30.627 cpu : usr=4.18%, sys=8.17%, ctx=401, majf=0, minf=1 00:15:30.627 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:15:30.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:30.627 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:30.627 issued rwts: total=3584,3843,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:30.627 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:30.627 00:15:30.627 Run status group 0 (all jobs): 00:15:30.627 READ: bw=61.4MiB/s (64.4MB/s), 13.9MiB/s-19.8MiB/s (14.6MB/s-20.8MB/s), io=62.0MiB (65.0MB), run=1003-1010msec 00:15:30.627 WRITE: bw=63.8MiB/s (66.9MB/s), 14.4MiB/s-20.0MiB/s (15.1MB/s-21.0MB/s), io=64.4MiB (67.5MB), run=1003-1010msec 00:15:30.627 00:15:30.627 Disk stats (read/write): 00:15:30.627 nvme0n1: ios=4409/4608, merge=0/0, ticks=29866/32733, in_queue=62599, util=87.17% 00:15:30.628 nvme0n2: ios=2610/2847, merge=0/0, ticks=17321/24079, in_queue=41400, util=95.12% 00:15:30.628 nvme0n3: ios=3122/3286, merge=0/0, ticks=38352/37644, in_queue=75996, util=90.93% 00:15:30.628 nvme0n4: ios=2750/3072, merge=0/0, ticks=40521/50822, in_queue=91343, util=89.58% 00:15:30.628 13:54:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:15:30.628 [global] 00:15:30.628 thread=1 00:15:30.628 invalidate=1 00:15:30.628 rw=randwrite 00:15:30.628 time_based=1 00:15:30.628 runtime=1 00:15:30.628 ioengine=libaio 00:15:30.628 direct=1 00:15:30.628 bs=4096 00:15:30.628 iodepth=128 00:15:30.628 norandommap=0 00:15:30.628 numjobs=1 00:15:30.628 00:15:30.628 verify_dump=1 00:15:30.628 verify_backlog=512 00:15:30.628 verify_state_save=0 00:15:30.628 do_verify=1 00:15:30.628 verify=crc32c-intel 00:15:30.628 [job0] 00:15:30.628 filename=/dev/nvme0n1 00:15:30.628 [job1] 00:15:30.628 filename=/dev/nvme0n2 00:15:30.628 [job2] 00:15:30.628 filename=/dev/nvme0n3 00:15:30.628 [job3] 00:15:30.628 filename=/dev/nvme0n4 00:15:30.628 Could not set queue depth (nvme0n1) 00:15:30.628 Could not set queue depth (nvme0n2) 00:15:30.628 Could not set queue depth (nvme0n3) 00:15:30.628 
Could not set queue depth (nvme0n4) 00:15:30.885 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:30.885 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:30.885 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:30.885 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:30.885 fio-3.35 00:15:30.885 Starting 4 threads 00:15:32.259 00:15:32.259 job0: (groupid=0, jobs=1): err= 0: pid=3742149: Mon Jul 15 13:54:26 2024 00:15:32.259 read: IOPS=2565, BW=10.0MiB/s (10.5MB/s)(10.5MiB/1047msec) 00:15:32.259 slat (usec): min=3, max=11443, avg=129.10, stdev=786.63 00:15:32.259 clat (usec): min=6271, max=78306, avg=16336.50, stdev=12061.67 00:15:32.259 lat (usec): min=6276, max=78314, avg=16465.60, stdev=12113.99 00:15:32.259 clat percentiles (usec): 00:15:32.259 | 1.00th=[ 7242], 5.00th=[10814], 10.00th=[10945], 20.00th=[11207], 00:15:32.259 | 30.00th=[11469], 40.00th=[11600], 50.00th=[11994], 60.00th=[12125], 00:15:32.259 | 70.00th=[13960], 80.00th=[16909], 90.00th=[25822], 95.00th=[32637], 00:15:32.259 | 99.00th=[73925], 99.50th=[76022], 99.90th=[78119], 99.95th=[78119], 00:15:32.259 | 99.99th=[78119] 00:15:32.259 write: IOPS=2934, BW=11.5MiB/s (12.0MB/s)(12.0MiB/1047msec); 0 zone resets 00:15:32.259 slat (usec): min=4, max=27396, avg=207.70, stdev=1147.39 00:15:32.259 clat (usec): min=2614, max=78317, avg=28912.90, stdev=12562.83 00:15:32.259 lat (usec): min=2620, max=78325, avg=29120.60, stdev=12607.04 00:15:32.259 clat percentiles (usec): 00:15:32.259 | 1.00th=[10028], 5.00th=[19268], 10.00th=[20841], 20.00th=[21890], 00:15:32.259 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23462], 60.00th=[23725], 00:15:32.259 | 70.00th=[26346], 80.00th=[37487], 90.00th=[53216], 95.00th=[57934], 00:15:32.259 | 99.00th=[62653], 99.50th=[62653], 99.90th=[63701], 99.95th=[63701], 00:15:32.259 | 99.99th=[78119] 00:15:32.259 bw ( KiB/s): min=12224, max=12336, per=19.03%, avg=12280.00, stdev=79.20, samples=2 00:15:32.259 iops : min= 3056, max= 3084, avg=3070.00, stdev=19.80, samples=2 00:15:32.259 lat (msec) : 4=0.42%, 10=0.68%, 20=42.81%, 50=47.19%, 100=8.91% 00:15:32.259 cpu : usr=2.20%, sys=3.82%, ctx=389, majf=0, minf=19 00:15:32.259 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:15:32.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:32.259 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:32.259 issued rwts: total=2686,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:32.259 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:32.259 job1: (groupid=0, jobs=1): err= 0: pid=3742150: Mon Jul 15 13:54:26 2024 00:15:32.259 read: IOPS=5216, BW=20.4MiB/s (21.4MB/s)(20.5MiB/1004msec) 00:15:32.259 slat (usec): min=2, max=40184, avg=82.74, stdev=828.49 00:15:32.259 clat (usec): min=1721, max=57846, avg=12341.10, stdev=8235.22 00:15:32.259 lat (usec): min=1724, max=62851, avg=12423.84, stdev=8271.83 00:15:32.259 clat percentiles (usec): 00:15:32.259 | 1.00th=[ 2704], 5.00th=[ 4817], 10.00th=[ 5604], 20.00th=[ 8356], 00:15:32.259 | 30.00th=[ 9634], 40.00th=[10159], 50.00th=[10552], 60.00th=[11076], 00:15:32.259 | 70.00th=[11600], 80.00th=[12780], 90.00th=[21627], 95.00th=[30016], 00:15:32.259 | 99.00th=[54264], 99.50th=[54264], 99.90th=[54264], 99.95th=[54264], 
00:15:32.259 | 99.99th=[57934] 00:15:32.259 write: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec); 0 zone resets 00:15:32.259 slat (usec): min=3, max=16968, avg=77.19, stdev=649.31 00:15:32.259 clat (usec): min=247, max=57130, avg=11172.29, stdev=7013.79 00:15:32.259 lat (usec): min=263, max=57135, avg=11249.48, stdev=7046.95 00:15:32.259 clat percentiles (usec): 00:15:32.259 | 1.00th=[ 545], 5.00th=[ 1827], 10.00th=[ 4752], 20.00th=[ 6587], 00:15:32.259 | 30.00th=[ 9634], 40.00th=[10159], 50.00th=[10421], 60.00th=[10683], 00:15:32.259 | 70.00th=[11076], 80.00th=[13042], 90.00th=[18220], 95.00th=[21627], 00:15:32.259 | 99.00th=[42206], 99.50th=[50594], 99.90th=[55837], 99.95th=[56886], 00:15:32.259 | 99.99th=[56886] 00:15:32.259 bw ( KiB/s): min=20480, max=24488, per=34.85%, avg=22484.00, stdev=2834.08, samples=2 00:15:32.259 iops : min= 5120, max= 6122, avg=5621.00, stdev=708.52, samples=2 00:15:32.259 lat (usec) : 250=0.01%, 500=0.13%, 750=0.87%, 1000=0.64% 00:15:32.259 lat (msec) : 2=1.14%, 4=2.77%, 10=32.04%, 20=52.90%, 50=8.70% 00:15:32.259 lat (msec) : 100=0.79% 00:15:32.259 cpu : usr=5.38%, sys=6.98%, ctx=455, majf=0, minf=7 00:15:32.259 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:15:32.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:32.259 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:32.259 issued rwts: total=5237,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:32.259 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:32.259 job2: (groupid=0, jobs=1): err= 0: pid=3742152: Mon Jul 15 13:54:26 2024 00:15:32.259 read: IOPS=3299, BW=12.9MiB/s (13.5MB/s)(13.0MiB/1010msec) 00:15:32.259 slat (usec): min=2, max=13508, avg=150.70, stdev=991.15 00:15:32.259 clat (usec): min=1374, max=72286, avg=17506.80, stdev=14195.54 00:15:32.259 lat (usec): min=1378, max=72297, avg=17657.50, stdev=14294.32 00:15:32.259 clat percentiles (usec): 00:15:32.259 | 1.00th=[ 3425], 5.00th=[ 7177], 10.00th=[ 9503], 20.00th=[11076], 00:15:32.259 | 30.00th=[12256], 40.00th=[12911], 50.00th=[13173], 60.00th=[13435], 00:15:32.259 | 70.00th=[14484], 80.00th=[16450], 90.00th=[35914], 95.00th=[61080], 00:15:32.259 | 99.00th=[67634], 99.50th=[69731], 99.90th=[71828], 99.95th=[71828], 00:15:32.259 | 99.99th=[71828] 00:15:32.259 write: IOPS=3548, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1010msec); 0 zone resets 00:15:32.259 slat (usec): min=3, max=10112, avg=121.24, stdev=541.98 00:15:32.259 clat (usec): min=391, max=72272, avg=19486.25, stdev=10640.14 00:15:32.260 lat (usec): min=409, max=72295, avg=19607.49, stdev=10678.48 00:15:32.260 clat percentiles (usec): 00:15:32.260 | 1.00th=[ 1942], 5.00th=[ 7242], 10.00th=[ 9372], 20.00th=[10552], 00:15:32.260 | 30.00th=[11994], 40.00th=[14222], 50.00th=[21103], 60.00th=[22676], 00:15:32.260 | 70.00th=[23200], 80.00th=[23462], 90.00th=[24773], 95.00th=[41681], 00:15:32.260 | 99.00th=[58983], 99.50th=[60556], 99.90th=[68682], 99.95th=[71828], 00:15:32.260 | 99.99th=[71828] 00:15:32.260 bw ( KiB/s): min=12192, max=16368, per=22.13%, avg=14280.00, stdev=2952.88, samples=2 00:15:32.260 iops : min= 3048, max= 4092, avg=3570.00, stdev=738.22, samples=2 00:15:32.260 lat (usec) : 500=0.04%, 1000=0.09% 00:15:32.260 lat (msec) : 2=0.75%, 4=0.97%, 10=10.92%, 20=52.49%, 50=29.34% 00:15:32.260 lat (msec) : 100=5.41% 00:15:32.260 cpu : usr=4.16%, sys=6.24%, ctx=409, majf=0, minf=13 00:15:32.260 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:15:32.260 submit 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:32.260 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:32.260 issued rwts: total=3332,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:32.260 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:32.260 job3: (groupid=0, jobs=1): err= 0: pid=3742153: Mon Jul 15 13:54:26 2024 00:15:32.260 read: IOPS=4055, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1010msec) 00:15:32.260 slat (usec): min=3, max=15315, avg=109.90, stdev=791.43 00:15:32.260 clat (usec): min=5501, max=30591, avg=14107.84, stdev=4038.52 00:15:32.260 lat (usec): min=5506, max=30633, avg=14217.74, stdev=4083.30 00:15:32.260 clat percentiles (usec): 00:15:32.260 | 1.00th=[ 6325], 5.00th=[ 9634], 10.00th=[10683], 20.00th=[11600], 00:15:32.260 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12780], 60.00th=[13566], 00:15:32.260 | 70.00th=[14746], 80.00th=[16581], 90.00th=[20055], 95.00th=[22414], 00:15:32.260 | 99.00th=[27395], 99.50th=[27919], 99.90th=[29230], 99.95th=[29230], 00:15:32.260 | 99.99th=[30540] 00:15:32.260 write: IOPS=4555, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1010msec); 0 zone resets 00:15:32.260 slat (usec): min=4, max=21816, avg=110.01, stdev=720.50 00:15:32.260 clat (usec): min=1873, max=86802, avg=15280.82, stdev=11785.71 00:15:32.260 lat (usec): min=1879, max=86810, avg=15390.83, stdev=11850.68 00:15:32.260 clat percentiles (usec): 00:15:32.260 | 1.00th=[ 4228], 5.00th=[ 6718], 10.00th=[ 8455], 20.00th=[10945], 00:15:32.260 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12649], 60.00th=[12780], 00:15:32.260 | 70.00th=[12911], 80.00th=[14353], 90.00th=[20579], 95.00th=[39060], 00:15:32.260 | 99.00th=[77071], 99.50th=[83362], 99.90th=[86508], 99.95th=[86508], 00:15:32.260 | 99.99th=[86508] 00:15:32.260 bw ( KiB/s): min=16384, max=19400, per=27.73%, avg=17892.00, stdev=2132.63, samples=2 00:15:32.260 iops : min= 4096, max= 4850, avg=4473.00, stdev=533.16, samples=2 00:15:32.260 lat (msec) : 2=0.09%, 4=0.30%, 10=10.75%, 20=77.61%, 50=9.44% 00:15:32.260 lat (msec) : 100=1.81% 00:15:32.260 cpu : usr=5.25%, sys=8.82%, ctx=462, majf=0, minf=13 00:15:32.260 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:15:32.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:32.260 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:32.260 issued rwts: total=4096,4601,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:32.260 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:32.260 00:15:32.260 Run status group 0 (all jobs): 00:15:32.260 READ: bw=57.3MiB/s (60.1MB/s), 10.0MiB/s-20.4MiB/s (10.5MB/s-21.4MB/s), io=60.0MiB (62.9MB), run=1004-1047msec 00:15:32.260 WRITE: bw=63.0MiB/s (66.1MB/s), 11.5MiB/s-21.9MiB/s (12.0MB/s-23.0MB/s), io=66.0MiB (69.2MB), run=1004-1047msec 00:15:32.260 00:15:32.260 Disk stats (read/write): 00:15:32.260 nvme0n1: ios=2474/2560, merge=0/0, ticks=33062/71964, in_queue=105026, util=86.07% 00:15:32.260 nvme0n2: ios=4596/4608, merge=0/0, ticks=48454/43069, in_queue=91523, util=90.55% 00:15:32.260 nvme0n3: ios=2618/3047, merge=0/0, ticks=43171/57096, in_queue=100267, util=94.05% 00:15:32.260 nvme0n4: ios=3641/3711, merge=0/0, ticks=48897/54466, in_queue=103363, util=94.32% 00:15:32.260 13:54:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:15:32.260 13:54:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3742287 00:15:32.260 13:54:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:15:32.260 13:54:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:15:32.260 [global] 00:15:32.260 thread=1 00:15:32.260 invalidate=1 00:15:32.260 rw=read 00:15:32.260 time_based=1 00:15:32.260 runtime=10 00:15:32.260 ioengine=libaio 00:15:32.260 direct=1 00:15:32.260 bs=4096 00:15:32.260 iodepth=1 00:15:32.260 norandommap=1 00:15:32.260 numjobs=1 00:15:32.260 00:15:32.260 [job0] 00:15:32.260 filename=/dev/nvme0n1 00:15:32.260 [job1] 00:15:32.260 filename=/dev/nvme0n2 00:15:32.260 [job2] 00:15:32.260 filename=/dev/nvme0n3 00:15:32.260 [job3] 00:15:32.260 filename=/dev/nvme0n4 00:15:32.260 Could not set queue depth (nvme0n1) 00:15:32.260 Could not set queue depth (nvme0n2) 00:15:32.260 Could not set queue depth (nvme0n3) 00:15:32.260 Could not set queue depth (nvme0n4) 00:15:32.260 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:32.260 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:32.260 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:32.260 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:32.260 fio-3.35 00:15:32.260 Starting 4 threads 00:15:35.541 13:54:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:15:35.541 13:54:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:15:35.541 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=3813376, buflen=4096 00:15:35.541 fio: pid=3742500, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:35.541 13:54:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:35.541 13:54:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:15:35.542 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=21929984, buflen=4096 00:15:35.542 fio: pid=3742499, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:35.799 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=16773120, buflen=4096 00:15:35.799 fio: pid=3742465, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:35.799 13:54:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:35.799 13:54:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:15:36.364 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=16281600, buflen=4096 00:15:36.364 fio: pid=3742486, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:36.364 13:54:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:36.364 13:54:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:15:36.364 00:15:36.364 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, 
func=io_u error, error=Remote I/O error): pid=3742465: Mon Jul 15 13:54:30 2024 00:15:36.364 read: IOPS=1190, BW=4760KiB/s (4874kB/s)(16.0MiB/3441msec) 00:15:36.364 slat (usec): min=5, max=11700, avg=15.22, stdev=247.04 00:15:36.364 clat (usec): min=184, max=62912, avg=817.04, stdev=4581.43 00:15:36.364 lat (usec): min=190, max=62947, avg=832.27, stdev=4588.91 00:15:36.364 clat percentiles (usec): 00:15:36.364 | 1.00th=[ 202], 5.00th=[ 217], 10.00th=[ 229], 20.00th=[ 245], 00:15:36.364 | 30.00th=[ 260], 40.00th=[ 277], 50.00th=[ 289], 60.00th=[ 302], 00:15:36.364 | 70.00th=[ 322], 80.00th=[ 355], 90.00th=[ 429], 95.00th=[ 474], 00:15:36.364 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:15:36.364 | 99.99th=[62653] 00:15:36.364 bw ( KiB/s): min= 96, max=11232, per=29.45%, avg=4501.33, stdev=5123.84, samples=6 00:15:36.364 iops : min= 24, max= 2808, avg=1125.33, stdev=1280.96, samples=6 00:15:36.364 lat (usec) : 250=24.54%, 500=72.27%, 750=1.88%, 1000=0.02% 00:15:36.364 lat (msec) : 2=0.02%, 50=1.22%, 100=0.02% 00:15:36.364 cpu : usr=0.78%, sys=1.80%, ctx=4100, majf=0, minf=1 00:15:36.364 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:36.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:36.364 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:36.364 issued rwts: total=4096,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:36.364 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:36.364 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3742486: Mon Jul 15 13:54:30 2024 00:15:36.364 read: IOPS=1058, BW=4232KiB/s (4334kB/s)(15.5MiB/3757msec) 00:15:36.364 slat (usec): min=4, max=31470, avg=30.45, stdev=575.87 00:15:36.364 clat (usec): min=200, max=42066, avg=905.65, stdev=4888.44 00:15:36.364 lat (usec): min=211, max=42077, avg=936.11, stdev=4922.19 00:15:36.364 clat percentiles (usec): 00:15:36.364 | 1.00th=[ 212], 5.00th=[ 229], 10.00th=[ 241], 20.00th=[ 258], 00:15:36.364 | 30.00th=[ 273], 40.00th=[ 285], 50.00th=[ 297], 60.00th=[ 310], 00:15:36.364 | 70.00th=[ 326], 80.00th=[ 351], 90.00th=[ 383], 95.00th=[ 441], 00:15:36.364 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:15:36.364 | 99.99th=[42206] 00:15:36.364 bw ( KiB/s): min= 96, max=12176, per=26.34%, avg=4025.86, stdev=4757.38, samples=7 00:15:36.364 iops : min= 24, max= 3044, avg=1006.43, stdev=1189.35, samples=7 00:15:36.364 lat (usec) : 250=15.82%, 500=81.49%, 750=1.08%, 1000=0.03% 00:15:36.364 lat (msec) : 2=0.05%, 10=0.03%, 50=1.48% 00:15:36.364 cpu : usr=0.64%, sys=1.68%, ctx=3986, majf=0, minf=1 00:15:36.364 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:36.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:36.364 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:36.364 issued rwts: total=3976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:36.364 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:36.364 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3742499: Mon Jul 15 13:54:30 2024 00:15:36.364 read: IOPS=1679, BW=6718KiB/s (6879kB/s)(20.9MiB/3188msec) 00:15:36.364 slat (nsec): min=4643, max=63362, avg=15218.87, stdev=10151.81 00:15:36.364 clat (usec): min=206, max=41345, avg=572.34, stdev=3136.31 00:15:36.364 lat (usec): min=211, max=41353, avg=587.56, stdev=3136.99 
00:15:36.364 clat percentiles (usec): 00:15:36.364 | 1.00th=[ 217], 5.00th=[ 223], 10.00th=[ 229], 20.00th=[ 239], 00:15:36.364 | 30.00th=[ 251], 40.00th=[ 273], 50.00th=[ 289], 60.00th=[ 322], 00:15:36.364 | 70.00th=[ 371], 80.00th=[ 429], 90.00th=[ 494], 95.00th=[ 537], 00:15:36.364 | 99.00th=[ 652], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:15:36.364 | 99.99th=[41157] 00:15:36.364 bw ( KiB/s): min= 112, max=14008, per=46.67%, avg=7133.33, stdev=5902.55, samples=6 00:15:36.364 iops : min= 28, max= 3502, avg=1783.33, stdev=1475.64, samples=6 00:15:36.364 lat (usec) : 250=29.08%, 500=62.11%, 750=8.18% 00:15:36.364 lat (msec) : 4=0.02%, 50=0.60% 00:15:36.364 cpu : usr=0.75%, sys=3.20%, ctx=5356, majf=0, minf=1 00:15:36.364 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:36.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:36.364 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:36.364 issued rwts: total=5355,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:36.364 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:36.364 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3742500: Mon Jul 15 13:54:30 2024 00:15:36.364 read: IOPS=319, BW=1278KiB/s (1308kB/s)(3724KiB/2915msec) 00:15:36.364 slat (nsec): min=5556, max=76520, avg=7322.20, stdev=4441.73 00:15:36.364 clat (usec): min=190, max=41143, avg=3097.52, stdev=10401.39 00:15:36.364 lat (usec): min=196, max=41158, avg=3104.84, stdev=10404.41 00:15:36.364 clat percentiles (usec): 00:15:36.364 | 1.00th=[ 194], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 206], 00:15:36.364 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 221], 60.00th=[ 227], 00:15:36.364 | 70.00th=[ 235], 80.00th=[ 243], 90.00th=[ 258], 95.00th=[41157], 00:15:36.364 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:15:36.364 | 99.99th=[41157] 00:15:36.364 bw ( KiB/s): min= 96, max= 6920, per=9.58%, avg=1464.00, stdev=3050.00, samples=5 00:15:36.364 iops : min= 24, max= 1730, avg=366.00, stdev=762.50, samples=5 00:15:36.364 lat (usec) : 250=86.70%, 500=6.01% 00:15:36.364 lat (msec) : 20=0.21%, 50=6.97% 00:15:36.364 cpu : usr=0.07%, sys=0.41%, ctx=933, majf=0, minf=1 00:15:36.364 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:36.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:36.364 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:36.364 issued rwts: total=932,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:36.364 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:36.364 00:15:36.364 Run status group 0 (all jobs): 00:15:36.364 READ: bw=14.9MiB/s (15.7MB/s), 1278KiB/s-6718KiB/s (1308kB/s-6879kB/s), io=56.1MiB (58.8MB), run=2915-3757msec 00:15:36.364 00:15:36.364 Disk stats (read/write): 00:15:36.364 nvme0n1: ios=3904/0, merge=0/0, ticks=3249/0, in_queue=3249, util=95.77% 00:15:36.364 nvme0n2: ios=3652/0, merge=0/0, ticks=4405/0, in_queue=4405, util=97.64% 00:15:36.364 nvme0n3: ios=5397/0, merge=0/0, ticks=4013/0, in_queue=4013, util=98.88% 00:15:36.364 nvme0n4: ios=926/0, merge=0/0, ticks=2826/0, in_queue=2826, util=96.71% 00:15:36.364 13:54:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:36.364 13:54:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_delete Malloc3 00:15:36.622 13:54:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:36.622 13:54:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:15:37.186 13:54:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:37.186 13:54:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:15:37.186 13:54:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:37.186 13:54:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:15:37.445 13:54:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:15:37.445 13:54:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 3742287 00:15:37.445 13:54:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:15:37.445 13:54:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:37.703 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:37.703 13:54:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:37.703 13:54:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:15:37.703 13:54:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:37.703 13:54:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:37.703 13:54:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:37.703 13:54:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:37.703 13:54:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:15:37.703 13:54:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:15:37.703 13:54:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:15:37.703 nvmf hotplug test: fio failed as expected 00:15:37.703 13:54:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:37.963 13:54:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:15:37.963 13:54:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:15:37.963 13:54:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:15:37.963 13:54:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:15:37.963 13:54:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:15:37.963 13:54:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:37.963 13:54:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:15:37.963 13:54:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:37.963 13:54:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:15:37.963 13:54:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 
00:15:37.963 13:54:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:37.963 rmmod nvme_tcp 00:15:37.963 rmmod nvme_fabrics 00:15:37.963 rmmod nvme_keyring 00:15:37.963 13:54:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:37.963 13:54:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:15:37.963 13:54:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:15:37.963 13:54:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 3740244 ']' 00:15:37.963 13:54:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 3740244 00:15:37.963 13:54:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 3740244 ']' 00:15:37.963 13:54:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 3740244 00:15:37.963 13:54:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:15:37.963 13:54:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:37.963 13:54:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3740244 00:15:37.963 13:54:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:37.963 13:54:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:37.963 13:54:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3740244' 00:15:37.963 killing process with pid 3740244 00:15:37.963 13:54:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 3740244 00:15:37.963 13:54:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 3740244 00:15:38.221 13:54:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:38.221 13:54:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:38.221 13:54:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:38.221 13:54:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:38.221 13:54:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:38.221 13:54:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:38.221 13:54:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:38.221 13:54:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:40.755 13:54:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:40.755 00:15:40.755 real 0m23.967s 00:15:40.755 user 1m25.158s 00:15:40.755 sys 0m6.364s 00:15:40.755 13:54:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:40.755 13:54:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.755 ************************************ 00:15:40.755 END TEST nvmf_fio_target 00:15:40.755 ************************************ 00:15:40.755 13:54:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:40.755 13:54:35 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:15:40.755 13:54:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:40.755 13:54:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:40.755 13:54:35 nvmf_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:15:40.755 ************************************ 00:15:40.755 START TEST nvmf_bdevio 00:15:40.755 ************************************ 00:15:40.755 13:54:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:15:40.755 * Looking for test storage... 00:15:40.755 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:40.755 13:54:35 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:40.755 13:54:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:15:40.755 13:54:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:40.755 13:54:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:40.755 13:54:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:40.755 13:54:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:40.755 13:54:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:40.755 13:54:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:40.755 13:54:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:40.755 13:54:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:40.755 13:54:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:40.755 13:54:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:40.755 13:54:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:40.755 13:54:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:40.755 13:54:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:40.755 13:54:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:40.755 13:54:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:40.755 13:54:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:40.755 13:54:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:40.755 13:54:35 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:40.755 13:54:35 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:40.755 13:54:35 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:40.755 13:54:35 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.756 13:54:35 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.756 13:54:35 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.756 13:54:35 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:15:40.756 13:54:35 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.756 13:54:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:15:40.756 13:54:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:40.756 13:54:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:40.756 13:54:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:40.756 13:54:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:40.756 13:54:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:40.756 13:54:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:40.756 13:54:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:40.756 13:54:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:40.756 13:54:35 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:40.756 13:54:35 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:40.756 13:54:35 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:15:40.756 13:54:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:40.756 13:54:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:40.756 13:54:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:40.756 13:54:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:40.756 13:54:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:40.756 13:54:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:40.756 13:54:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:15:40.756 13:54:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:40.756 13:54:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:40.756 13:54:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:40.756 13:54:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:15:40.756 13:54:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:15:42.659 Found 0000:84:00.0 (0x8086 - 0x159b) 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:15:42.659 Found 0000:84:00.1 (0x8086 - 0x159b) 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:15:42.659 Found net devices under 0000:84:00.0: cvl_0_0 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:15:42.659 
Found net devices under 0000:84:00.1: cvl_0_1 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:42.659 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:42.660 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:42.660 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:42.660 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:42.660 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:42.660 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:42.660 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:42.660 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:42.660 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:42.660 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:15:42.660 00:15:42.660 --- 10.0.0.2 ping statistics --- 00:15:42.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.660 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:15:42.660 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:42.660 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:42.660 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:15:42.660 00:15:42.660 --- 10.0.0.1 ping statistics --- 00:15:42.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.660 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:15:42.660 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:42.660 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:15:42.660 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:42.660 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:42.660 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:42.660 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:42.660 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:42.660 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:42.660 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:42.660 13:54:37 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:42.660 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:42.660 13:54:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:42.660 13:54:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:42.660 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=3745130 00:15:42.660 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:15:42.660 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 3745130 00:15:42.660 13:54:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 3745130 ']' 00:15:42.660 13:54:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:42.660 13:54:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:42.660 13:54:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:42.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:42.660 13:54:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:42.660 13:54:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:42.660 [2024-07-15 13:54:37.364452] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:15:42.660 [2024-07-15 13:54:37.364534] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:42.660 EAL: No free 2048 kB hugepages reported on node 1 00:15:42.660 [2024-07-15 13:54:37.427720] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:42.917 [2024-07-15 13:54:37.533785] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:42.917 [2024-07-15 13:54:37.533832] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:42.917 [2024-07-15 13:54:37.533861] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:42.917 [2024-07-15 13:54:37.533873] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:42.917 [2024-07-15 13:54:37.533883] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:42.917 [2024-07-15 13:54:37.534255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:15:42.918 [2024-07-15 13:54:37.534310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:15:42.918 [2024-07-15 13:54:37.534327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:15:42.918 [2024-07-15 13:54:37.534329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:42.918 13:54:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:42.918 13:54:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:15:42.918 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:42.918 13:54:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:42.918 13:54:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:42.918 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:42.918 13:54:37 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:42.918 13:54:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.918 13:54:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:42.918 [2024-07-15 13:54:37.673375] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:42.918 13:54:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.918 13:54:37 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:42.918 13:54:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.918 13:54:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:42.918 Malloc0 00:15:42.918 13:54:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.918 13:54:37 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:42.918 13:54:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.918 13:54:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:42.918 13:54:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.918 13:54:37 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:42.918 13:54:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.918 13:54:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:42.918 13:54:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.918 13:54:37 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:42.918 13:54:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.918 13:54:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
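The nvmf_tcp_init trace above (nvmf/common.sh@242-268) reduces to a short interface-setup script: move one port of the e810 pair into a network namespace, address both ends, open TCP/4420, and verify reachability with ping in both directions. A minimal bash sketch of the equivalent commands, assuming the cvl_0_0/cvl_0_1 device names and the 10.0.0.0/24 addressing used in this run:

  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # accept NVMe/TCP (port 4420) on the initiator-side interface
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

All of these commands appear verbatim in the trace; only their consolidation into one standalone script is illustrative.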
00:15:42.918 [2024-07-15 13:54:37.724502] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:42.918 13:54:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.918 13:54:37 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:15:42.918 13:54:37 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:42.918 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:15:42.918 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:15:42.918 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:42.918 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:42.918 { 00:15:42.918 "params": { 00:15:42.918 "name": "Nvme$subsystem", 00:15:42.918 "trtype": "$TEST_TRANSPORT", 00:15:42.918 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:42.918 "adrfam": "ipv4", 00:15:42.918 "trsvcid": "$NVMF_PORT", 00:15:42.918 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:42.918 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:42.918 "hdgst": ${hdgst:-false}, 00:15:42.918 "ddgst": ${ddgst:-false} 00:15:42.918 }, 00:15:42.918 "method": "bdev_nvme_attach_controller" 00:15:42.918 } 00:15:42.918 EOF 00:15:42.918 )") 00:15:42.918 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:15:42.918 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:15:42.918 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:15:42.918 13:54:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:42.918 "params": { 00:15:42.918 "name": "Nvme1", 00:15:42.918 "trtype": "tcp", 00:15:42.918 "traddr": "10.0.0.2", 00:15:42.918 "adrfam": "ipv4", 00:15:42.918 "trsvcid": "4420", 00:15:42.918 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:42.918 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:42.918 "hdgst": false, 00:15:42.918 "ddgst": false 00:15:42.918 }, 00:15:42.918 "method": "bdev_nvme_attach_controller" 00:15:42.918 }' 00:15:43.177 [2024-07-15 13:54:37.768403] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
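Pulled out of the xtrace noise, target/bdevio.sh stands up the target with a five-step RPC sequence before launching bdevio: create the TCP transport, back it with a 64 MiB / 512-byte-block malloc bdev, expose that bdev through a subsystem, and listen on 10.0.0.2:4420. A sketch using the same rpc_cmd wrapper seen in the trace (it issues JSON-RPC calls against the running nvmf_tgt):

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd bdev_malloc_create 64 512 -b Malloc0                         # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevio then connects from the initiator side using the generated JSON shown above, i.e. a single bdev_nvme_attach_controller to 10.0.0.2:4420 against nqn.2016-06.io.spdk:cnode1 with header and data digests disabled.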
00:15:43.177 [2024-07-15 13:54:37.768488] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3745165 ] 00:15:43.177 EAL: No free 2048 kB hugepages reported on node 1 00:15:43.177 [2024-07-15 13:54:37.829116] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:43.177 [2024-07-15 13:54:37.941847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:43.177 [2024-07-15 13:54:37.941897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:43.177 [2024-07-15 13:54:37.941900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.436 I/O targets: 00:15:43.436 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:43.436 00:15:43.436 00:15:43.436 CUnit - A unit testing framework for C - Version 2.1-3 00:15:43.436 http://cunit.sourceforge.net/ 00:15:43.436 00:15:43.436 00:15:43.436 Suite: bdevio tests on: Nvme1n1 00:15:43.436 Test: blockdev write read block ...passed 00:15:43.436 Test: blockdev write zeroes read block ...passed 00:15:43.436 Test: blockdev write zeroes read no split ...passed 00:15:43.436 Test: blockdev write zeroes read split ...passed 00:15:43.696 Test: blockdev write zeroes read split partial ...passed 00:15:43.696 Test: blockdev reset ...[2024-07-15 13:54:38.327682] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:43.696 [2024-07-15 13:54:38.327810] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1384bd0 (9): Bad file descriptor 00:15:43.696 [2024-07-15 13:54:38.396571] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:43.696 passed 00:15:43.696 Test: blockdev write read 8 blocks ...passed 00:15:43.696 Test: blockdev write read size > 128k ...passed 00:15:43.696 Test: blockdev write read invalid size ...passed 00:15:43.696 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:43.696 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:43.696 Test: blockdev write read max offset ...passed 00:15:43.956 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:43.956 Test: blockdev writev readv 8 blocks ...passed 00:15:43.956 Test: blockdev writev readv 30 x 1block ...passed 00:15:43.956 Test: blockdev writev readv block ...passed 00:15:43.956 Test: blockdev writev readv size > 128k ...passed 00:15:43.956 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:43.956 Test: blockdev comparev and writev ...[2024-07-15 13:54:38.654159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:43.956 [2024-07-15 13:54:38.654195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:43.956 [2024-07-15 13:54:38.654219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:43.956 [2024-07-15 13:54:38.654237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:43.956 [2024-07-15 13:54:38.654681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:43.956 [2024-07-15 13:54:38.654711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:43.956 [2024-07-15 13:54:38.654735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:43.956 [2024-07-15 13:54:38.654760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:43.956 [2024-07-15 13:54:38.655229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:43.956 [2024-07-15 13:54:38.655252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:43.956 [2024-07-15 13:54:38.655273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:43.956 [2024-07-15 13:54:38.655288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:43.956 [2024-07-15 13:54:38.655732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:43.956 [2024-07-15 13:54:38.655762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:43.956 [2024-07-15 13:54:38.655784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:43.956 [2024-07-15 13:54:38.655800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:43.956 passed 00:15:43.956 Test: blockdev nvme passthru rw ...passed 00:15:43.956 Test: blockdev nvme passthru vendor specific ...[2024-07-15 13:54:38.740151] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:43.956 [2024-07-15 13:54:38.740176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:43.956 [2024-07-15 13:54:38.740461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:43.956 [2024-07-15 13:54:38.740484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:43.956 [2024-07-15 13:54:38.740778] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:43.956 [2024-07-15 13:54:38.740801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:43.956 [2024-07-15 13:54:38.741123] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:43.956 [2024-07-15 13:54:38.741145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:43.956 passed 00:15:43.956 Test: blockdev nvme admin passthru ...passed 00:15:44.215 Test: blockdev copy ...passed 00:15:44.215 00:15:44.215 Run Summary: Type Total Ran Passed Failed Inactive 00:15:44.215 suites 1 1 n/a 0 0 00:15:44.215 tests 23 23 23 0 0 00:15:44.215 asserts 152 152 152 0 n/a 00:15:44.215 00:15:44.215 Elapsed time = 1.349 seconds 00:15:44.215 13:54:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:44.215 13:54:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.215 13:54:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:44.215 13:54:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.215 13:54:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:44.215 13:54:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:15:44.215 13:54:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:44.215 13:54:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:15:44.215 13:54:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:44.215 13:54:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:15:44.215 13:54:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:44.215 13:54:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:44.473 rmmod nvme_tcp 00:15:44.473 rmmod nvme_fabrics 00:15:44.473 rmmod nvme_keyring 00:15:44.473 13:54:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:44.473 13:54:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:15:44.473 13:54:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:15:44.473 13:54:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 3745130 ']' 00:15:44.473 13:54:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 3745130 00:15:44.473 13:54:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
3745130 ']' 00:15:44.473 13:54:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 3745130 00:15:44.473 13:54:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:15:44.473 13:54:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:44.473 13:54:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3745130 00:15:44.473 13:54:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:15:44.473 13:54:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:15:44.473 13:54:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3745130' 00:15:44.473 killing process with pid 3745130 00:15:44.473 13:54:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 3745130 00:15:44.473 13:54:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 3745130 00:15:44.750 13:54:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:44.750 13:54:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:44.750 13:54:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:44.750 13:54:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:44.750 13:54:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:44.750 13:54:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:44.750 13:54:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:44.750 13:54:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:46.668 13:54:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:46.668 00:15:46.668 real 0m6.364s 00:15:46.668 user 0m10.325s 00:15:46.668 sys 0m2.098s 00:15:46.668 13:54:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:46.668 13:54:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:46.668 ************************************ 00:15:46.668 END TEST nvmf_bdevio 00:15:46.668 ************************************ 00:15:46.668 13:54:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:46.668 13:54:41 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:46.668 13:54:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:46.668 13:54:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:46.668 13:54:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:46.668 ************************************ 00:15:46.668 START TEST nvmf_auth_target 00:15:46.668 ************************************ 00:15:46.668 13:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:46.927 * Looking for test storage... 
00:15:46.927 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:46.927 13:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:46.927 13:54:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:15:46.927 13:54:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:46.927 13:54:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:46.927 13:54:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:46.927 13:54:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:46.927 13:54:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:46.927 13:54:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:46.927 13:54:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:46.927 13:54:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:46.927 13:54:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:46.927 13:54:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:46.927 13:54:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:46.927 13:54:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:46.927 13:54:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:46.927 13:54:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:46.927 13:54:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:46.927 13:54:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:46.927 13:54:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:46.927 13:54:41 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:46.927 13:54:41 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:46.927 13:54:41 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:46.927 13:54:41 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.927 13:54:41 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.927 13:54:41 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.927 13:54:41 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:46.927 13:54:41 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.927 13:54:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:15:46.927 13:54:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:46.927 13:54:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:46.927 13:54:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:46.927 13:54:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:46.927 13:54:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:46.927 13:54:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:46.927 13:54:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:46.927 13:54:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:46.927 13:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:46.927 13:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:46.927 13:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:46.927 13:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:46.927 13:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:46.927 13:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:46.927 13:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:15:46.927 13:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:15:46.927 13:54:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:46.927 13:54:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:46.927 13:54:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:46.927 13:54:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:46.927 13:54:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:46.927 13:54:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:46.927 13:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:46.927 13:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:46.927 13:54:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:46.927 13:54:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:46.927 13:54:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:15:46.927 13:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.460 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:49.460 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:15:49.460 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:49.460 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:49.460 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:49.460 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:49.460 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:49.460 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:15:49.460 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:49.460 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:15:49.460 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:15:49.460 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:15:49.460 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:15:49.460 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:15:49.460 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:15:49.460 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:49.460 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:49.460 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:49.460 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:49.460 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:49.460 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:49.460 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:49.460 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:49.460 13:54:43 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:49.460 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:49.460 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:49.460 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:49.460 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:49.460 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:49.460 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:49.460 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:49.460 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:49.460 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:49.460 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:15:49.460 Found 0000:84:00.0 (0x8086 - 0x159b) 00:15:49.460 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:49.460 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:49.460 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:49.460 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:49.460 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:49.460 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:49.460 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:15:49.460 Found 0000:84:00.1 (0x8086 - 0x159b) 00:15:49.460 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:49.460 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:49.460 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:49.460 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:49.460 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:49.460 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: 
cvl_0_0' 00:15:49.461 Found net devices under 0000:84:00.0: cvl_0_0 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:15:49.461 Found net devices under 0000:84:00.1: cvl_0_1 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:49.461 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:49.461 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:15:49.461 00:15:49.461 --- 10.0.0.2 ping statistics --- 00:15:49.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.461 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:49.461 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:49.461 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:15:49.461 00:15:49.461 --- 10.0.0.1 ping statistics --- 00:15:49.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.461 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3747258 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3747258 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3747258 ']' 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
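For the auth test the target is started the same way as in the bdevio run, but with the nvmf_auth log flag enabled, and the script then blocks until the application's RPC socket is ready. A rough bash equivalent of the nvmfappstart/waitforlisten pair seen here; the polling loop is only a stand-in for the real waitforlisten helper, which is more robust:

  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
  nvmfpid=$!
  # wait for the UNIX domain RPC socket the log message above refers to
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done

Once the socket exists, rpc_cmd calls (as in the bdevio run above) can configure the auth target.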
00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:49.461 13:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.461 13:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:49.461 13:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:15:49.461 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:49.461 13:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:49.461 13:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.461 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:49.461 13:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=3747396 00:15:49.461 13:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:15:49.461 13:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:49.461 13:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:15:49.461 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:49.461 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:49.461 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:49.461 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:15:49.461 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:15:49.461 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:49.461 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=775f2503554fb5c02e900e013e5736a6d301479e4a24df72 00:15:49.461 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:15:49.461 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.dOk 00:15:49.461 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 775f2503554fb5c02e900e013e5736a6d301479e4a24df72 0 00:15:49.461 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 775f2503554fb5c02e900e013e5736a6d301479e4a24df72 0 00:15:49.461 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:49.461 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:49.461 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=775f2503554fb5c02e900e013e5736a6d301479e4a24df72 00:15:49.461 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:15:49.461 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:49.719 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.dOk 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.dOk 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.dOk 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=2bebda6c4a1d91c985fa096fa319b37a3aacab051f1105b171b6b22e792b0ab9 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.KVA 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 2bebda6c4a1d91c985fa096fa319b37a3aacab051f1105b171b6b22e792b0ab9 3 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 2bebda6c4a1d91c985fa096fa319b37a3aacab051f1105b171b6b22e792b0ab9 3 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=2bebda6c4a1d91c985fa096fa319b37a3aacab051f1105b171b6b22e792b0ab9 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.KVA 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.KVA 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.KVA 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a25113f3461d590b5067989f93af7751 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.4VU 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a25113f3461d590b5067989f93af7751 1 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a25113f3461d590b5067989f93af7751 1 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=a25113f3461d590b5067989f93af7751 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.4VU 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.4VU 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.4VU 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=631a11f546d559ab70d8d879597f2599bf2eeaf57a67ebde 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.MYz 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 631a11f546d559ab70d8d879597f2599bf2eeaf57a67ebde 2 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 631a11f546d559ab70d8d879597f2599bf2eeaf57a67ebde 2 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=631a11f546d559ab70d8d879597f2599bf2eeaf57a67ebde 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.MYz 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.MYz 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.MYz 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9da2f76d1bafe7ed4e99387c7ba428dea2ae7b617258c430 00:15:49.720 
13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.aI5 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9da2f76d1bafe7ed4e99387c7ba428dea2ae7b617258c430 2 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9da2f76d1bafe7ed4e99387c7ba428dea2ae7b617258c430 2 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9da2f76d1bafe7ed4e99387c7ba428dea2ae7b617258c430 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.aI5 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.aI5 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.aI5 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=1245a8c13ce3be55ea6ac5e0a555b73d 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.d0r 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 1245a8c13ce3be55ea6ac5e0a555b73d 1 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 1245a8c13ce3be55ea6ac5e0a555b73d 1 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=1245a8c13ce3be55ea6ac5e0a555b73d 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:15:49.720 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:49.977 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.d0r 00:15:49.977 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.d0r 00:15:49.977 13:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.d0r 00:15:49.977 13:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:15:49.977 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:15:49.977 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:49.977 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:49.977 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:15:49.977 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:15:49.977 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:49.977 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=09afe8e4be7030b0bbe2a5e9ed202f519424fd43cce9163d02ec86171cb59096 00:15:49.977 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:15:49.977 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.xHy 00:15:49.977 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 09afe8e4be7030b0bbe2a5e9ed202f519424fd43cce9163d02ec86171cb59096 3 00:15:49.977 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 09afe8e4be7030b0bbe2a5e9ed202f519424fd43cce9163d02ec86171cb59096 3 00:15:49.977 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:49.977 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:49.977 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=09afe8e4be7030b0bbe2a5e9ed202f519424fd43cce9163d02ec86171cb59096 00:15:49.977 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:15:49.977 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:49.977 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.xHy 00:15:49.977 13:54:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.xHy 00:15:49.977 13:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.xHy 00:15:49.977 13:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:15:49.977 13:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 3747258 00:15:49.977 13:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3747258 ']' 00:15:49.977 13:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:49.977 13:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:49.977 13:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:49.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
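The entries above trace gen_dhchap_key (nvmf/common.sh) producing each secret and its controller counterpart: a hex string read from /dev/urandom, written to a 0600 temp file in the DHHC-1 envelope. A minimal sketch of that helper, reconstructed only from the commands visible in this trace, follows; the function name is hypothetical and the DHHC-1 wrapping is an assumption, because the inline "python -" step is not echoed in the log (judging from the secrets later passed to nvme connect, the hex string itself is the key material and a 4-byte trailer, assumed to be a little-endian CRC32, is appended before base64 encoding).

# Sketch only: xxd/mktemp/chmod mirror the trace, the python wrapping is assumed.
declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)   # same table as in the trace

gen_dhchap_key_sketch() {             # hypothetical name; usage: gen_dhchap_key_sketch null 48
    local digest=$1 len=$2 key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)               # len hex characters
    file=$(mktemp -t "spdk.key-${digest}.XXX")
    python3 - "$key" "${digests[$digest]}" > "$file" <<'PY'
import base64, binascii, sys
key = sys.argv[1].encode()                                        # hex string used as-is
crc = binascii.crc32(key).to_bytes(4, "little")                   # assumed 4-byte trailer
print("DHHC-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
PY
    chmod 0600 "$file"
    echo "$file"
}

The resulting /tmp/spdk.key-* files are what the keyring_file_add_key RPCs register on both the target and the host socket in the entries that follow.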
00:15:49.977 13:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:49.977 13:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.234 13:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:50.234 13:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:15:50.234 13:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 3747396 /var/tmp/host.sock 00:15:50.234 13:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3747396 ']' 00:15:50.234 13:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:15:50.234 13:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:50.234 13:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:50.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:50.234 13:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:50.234 13:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.491 13:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:50.491 13:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:15:50.491 13:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:15:50.491 13:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.491 13:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.491 13:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.491 13:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:50.491 13:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.dOk 00:15:50.491 13:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.491 13:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.491 13:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.491 13:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.dOk 00:15:50.491 13:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.dOk 00:15:50.748 13:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.KVA ]] 00:15:50.748 13:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.KVA 00:15:50.748 13:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.748 13:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.748 13:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.748 13:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.KVA 00:15:50.748 13:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.KVA 00:15:51.006 13:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:51.006 13:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.4VU 00:15:51.006 13:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.006 13:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.006 13:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.006 13:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.4VU 00:15:51.006 13:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.4VU 00:15:51.263 13:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.MYz ]] 00:15:51.263 13:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.MYz 00:15:51.263 13:54:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.263 13:54:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.263 13:54:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.263 13:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.MYz 00:15:51.263 13:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.MYz 00:15:51.520 13:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:51.520 13:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.aI5 00:15:51.520 13:54:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.520 13:54:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.520 13:54:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.520 13:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.aI5 00:15:51.520 13:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.aI5 00:15:51.777 13:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.d0r ]] 00:15:51.777 13:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.d0r 00:15:51.777 13:54:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.777 13:54:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.777 13:54:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.777 13:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.d0r 00:15:51.777 13:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.d0r 00:15:52.035 13:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:52.035 13:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.xHy 00:15:52.035 13:54:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.035 13:54:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.035 13:54:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.035 13:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.xHy 00:15:52.035 13:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.xHy 00:15:52.292 13:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:15:52.292 13:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:15:52.292 13:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:52.292 13:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:52.292 13:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:52.292 13:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:52.549 13:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:15:52.550 13:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:52.550 13:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:52.550 13:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:52.550 13:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:52.550 13:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.550 13:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:52.550 13:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.550 13:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.550 13:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.550 13:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:52.550 13:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:52.809 00:15:53.068 13:54:47 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:53.068 13:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:53.068 13:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.068 13:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.068 13:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.068 13:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.068 13:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.327 13:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.327 13:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:53.327 { 00:15:53.327 "cntlid": 1, 00:15:53.327 "qid": 0, 00:15:53.327 "state": "enabled", 00:15:53.327 "thread": "nvmf_tgt_poll_group_000", 00:15:53.327 "listen_address": { 00:15:53.327 "trtype": "TCP", 00:15:53.327 "adrfam": "IPv4", 00:15:53.327 "traddr": "10.0.0.2", 00:15:53.327 "trsvcid": "4420" 00:15:53.327 }, 00:15:53.327 "peer_address": { 00:15:53.327 "trtype": "TCP", 00:15:53.327 "adrfam": "IPv4", 00:15:53.327 "traddr": "10.0.0.1", 00:15:53.327 "trsvcid": "52384" 00:15:53.327 }, 00:15:53.327 "auth": { 00:15:53.327 "state": "completed", 00:15:53.327 "digest": "sha256", 00:15:53.327 "dhgroup": "null" 00:15:53.327 } 00:15:53.327 } 00:15:53.327 ]' 00:15:53.327 13:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:53.327 13:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:53.327 13:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:53.327 13:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:53.327 13:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:53.327 13:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.327 13:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.327 13:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.585 13:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:Nzc1ZjI1MDM1NTRmYjVjMDJlOTAwZTAxM2U1NzM2YTZkMzAxNDc5ZTRhMjRkZjcyhKo8vA==: --dhchap-ctrl-secret DHHC-1:03:MmJlYmRhNmM0YTFkOTFjOTg1ZmEwOTZmYTMxOWIzN2EzYWFjYWIwNTFmMTEwNWIxNzFiNmIyMmU3OTJiMGFiOfuiRg8=: 00:15:54.521 13:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.521 13:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:54.521 13:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.521 13:54:49 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.521 13:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.521 13:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:54.521 13:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:54.521 13:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:54.779 13:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:15:54.779 13:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:54.779 13:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:54.779 13:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:54.779 13:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:54.779 13:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:54.779 13:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:54.779 13:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.779 13:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.779 13:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.779 13:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:54.779 13:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.037 00:15:55.037 13:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:55.037 13:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:55.037 13:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.296 13:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.296 13:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.296 13:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.296 13:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.296 13:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.296 13:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:55.296 { 00:15:55.296 "cntlid": 3, 00:15:55.296 "qid": 0, 00:15:55.296 
"state": "enabled", 00:15:55.296 "thread": "nvmf_tgt_poll_group_000", 00:15:55.296 "listen_address": { 00:15:55.296 "trtype": "TCP", 00:15:55.296 "adrfam": "IPv4", 00:15:55.296 "traddr": "10.0.0.2", 00:15:55.296 "trsvcid": "4420" 00:15:55.296 }, 00:15:55.296 "peer_address": { 00:15:55.296 "trtype": "TCP", 00:15:55.296 "adrfam": "IPv4", 00:15:55.296 "traddr": "10.0.0.1", 00:15:55.296 "trsvcid": "52394" 00:15:55.296 }, 00:15:55.296 "auth": { 00:15:55.296 "state": "completed", 00:15:55.296 "digest": "sha256", 00:15:55.296 "dhgroup": "null" 00:15:55.296 } 00:15:55.296 } 00:15:55.296 ]' 00:15:55.296 13:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:55.554 13:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:55.554 13:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:55.554 13:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:55.554 13:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:55.554 13:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.554 13:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.554 13:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:55.811 13:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:YTI1MTEzZjM0NjFkNTkwYjUwNjc5ODlmOTNhZjc3NTFLAk7g: --dhchap-ctrl-secret DHHC-1:02:NjMxYTExZjU0NmQ1NTlhYjcwZDhkODc5NTk3ZjI1OTliZjJlZWFmNTdhNjdlYmRlkGfA3A==: 00:15:56.742 13:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.742 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.742 13:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:56.742 13:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.742 13:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.742 13:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.742 13:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:56.742 13:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:56.742 13:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:56.999 13:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:15:56.999 13:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:57.000 13:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:57.000 13:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:57.000 13:54:51 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:57.000 13:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.000 13:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.000 13:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.000 13:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.000 13:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.000 13:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.000 13:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.257 00:15:57.257 13:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:57.257 13:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:57.257 13:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.514 13:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.514 13:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.514 13:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.514 13:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.514 13:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.514 13:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:57.514 { 00:15:57.514 "cntlid": 5, 00:15:57.514 "qid": 0, 00:15:57.514 "state": "enabled", 00:15:57.514 "thread": "nvmf_tgt_poll_group_000", 00:15:57.514 "listen_address": { 00:15:57.514 "trtype": "TCP", 00:15:57.514 "adrfam": "IPv4", 00:15:57.514 "traddr": "10.0.0.2", 00:15:57.514 "trsvcid": "4420" 00:15:57.514 }, 00:15:57.514 "peer_address": { 00:15:57.514 "trtype": "TCP", 00:15:57.514 "adrfam": "IPv4", 00:15:57.514 "traddr": "10.0.0.1", 00:15:57.514 "trsvcid": "33594" 00:15:57.514 }, 00:15:57.514 "auth": { 00:15:57.514 "state": "completed", 00:15:57.514 "digest": "sha256", 00:15:57.514 "dhgroup": "null" 00:15:57.514 } 00:15:57.514 } 00:15:57.514 ]' 00:15:57.514 13:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:57.514 13:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:57.514 13:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:57.514 13:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:57.514 13:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:15:57.514 13:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.514 13:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.514 13:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.772 13:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:OWRhMmY3NmQxYmFmZTdlZDRlOTkzODdjN2JhNDI4ZGVhMmFlN2I2MTcyNThjNDMw/g+/BQ==: --dhchap-ctrl-secret DHHC-1:01:MTI0NWE4YzEzY2UzYmU1NWVhNmFjNWUwYTU1NWI3M2TBB42v: 00:15:58.705 13:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.705 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.705 13:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:58.705 13:54:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.705 13:54:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.705 13:54:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.705 13:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:58.705 13:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:58.705 13:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:58.964 13:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:15:58.964 13:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:58.964 13:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:58.964 13:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:58.964 13:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:58.964 13:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.964 13:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:15:58.964 13:54:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.964 13:54:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.964 13:54:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.964 13:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:58.964 13:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:59.222 00:15:59.222 13:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:59.222 13:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:59.222 13:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.480 13:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.480 13:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.480 13:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.480 13:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.480 13:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.480 13:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:59.480 { 00:15:59.480 "cntlid": 7, 00:15:59.480 "qid": 0, 00:15:59.480 "state": "enabled", 00:15:59.480 "thread": "nvmf_tgt_poll_group_000", 00:15:59.480 "listen_address": { 00:15:59.480 "trtype": "TCP", 00:15:59.480 "adrfam": "IPv4", 00:15:59.480 "traddr": "10.0.0.2", 00:15:59.480 "trsvcid": "4420" 00:15:59.480 }, 00:15:59.480 "peer_address": { 00:15:59.480 "trtype": "TCP", 00:15:59.480 "adrfam": "IPv4", 00:15:59.480 "traddr": "10.0.0.1", 00:15:59.480 "trsvcid": "33626" 00:15:59.480 }, 00:15:59.480 "auth": { 00:15:59.480 "state": "completed", 00:15:59.480 "digest": "sha256", 00:15:59.480 "dhgroup": "null" 00:15:59.480 } 00:15:59.480 } 00:15:59.480 ]' 00:15:59.480 13:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:59.480 13:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:59.480 13:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:59.738 13:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:59.738 13:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:59.738 13:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.738 13:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.738 13:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.996 13:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MDlhZmU4ZTRiZTcwMzBiMGJiZTJhNWU5ZWQyMDJmNTE5NDI0ZmQ0M2NjZTkxNjNkMDJlYzg2MTcxY2I1OTA5NrD6o+E=: 00:16:00.931 13:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.931 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.931 13:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:00.931 13:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.931 13:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.931 13:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.931 13:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:00.931 13:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:00.931 13:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:00.931 13:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:01.189 13:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:16:01.189 13:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:01.189 13:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:01.189 13:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:01.189 13:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:01.189 13:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.189 13:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:01.189 13:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.189 13:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.189 13:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.189 13:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:01.189 13:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:01.447 00:16:01.447 13:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:01.447 13:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:01.447 13:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.705 13:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.705 13:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.705 13:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:16:01.705 13:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.705 13:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.705 13:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:01.705 { 00:16:01.705 "cntlid": 9, 00:16:01.705 "qid": 0, 00:16:01.705 "state": "enabled", 00:16:01.705 "thread": "nvmf_tgt_poll_group_000", 00:16:01.705 "listen_address": { 00:16:01.705 "trtype": "TCP", 00:16:01.705 "adrfam": "IPv4", 00:16:01.705 "traddr": "10.0.0.2", 00:16:01.705 "trsvcid": "4420" 00:16:01.705 }, 00:16:01.705 "peer_address": { 00:16:01.705 "trtype": "TCP", 00:16:01.705 "adrfam": "IPv4", 00:16:01.705 "traddr": "10.0.0.1", 00:16:01.705 "trsvcid": "33658" 00:16:01.705 }, 00:16:01.705 "auth": { 00:16:01.705 "state": "completed", 00:16:01.705 "digest": "sha256", 00:16:01.705 "dhgroup": "ffdhe2048" 00:16:01.705 } 00:16:01.705 } 00:16:01.705 ]' 00:16:01.705 13:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:01.705 13:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:01.705 13:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:01.705 13:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:01.705 13:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:01.964 13:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.964 13:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.964 13:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.222 13:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:Nzc1ZjI1MDM1NTRmYjVjMDJlOTAwZTAxM2U1NzM2YTZkMzAxNDc5ZTRhMjRkZjcyhKo8vA==: --dhchap-ctrl-secret DHHC-1:03:MmJlYmRhNmM0YTFkOTFjOTg1ZmEwOTZmYTMxOWIzN2EzYWFjYWIwNTFmMTEwNWIxNzFiNmIyMmU3OTJiMGFiOfuiRg8=: 00:16:03.158 13:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.158 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.158 13:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:03.158 13:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.158 13:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.158 13:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.158 13:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:03.158 13:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:03.158 13:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:16:03.158 13:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:16:03.158 13:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:03.158 13:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:03.158 13:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:03.158 13:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:03.158 13:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.158 13:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.158 13:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.158 13:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.417 13:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.417 13:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.417 13:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.674 00:16:03.674 13:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:03.674 13:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:03.674 13:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.932 13:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.932 13:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.932 13:54:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.932 13:54:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.932 13:54:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.932 13:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:03.932 { 00:16:03.932 "cntlid": 11, 00:16:03.932 "qid": 0, 00:16:03.932 "state": "enabled", 00:16:03.932 "thread": "nvmf_tgt_poll_group_000", 00:16:03.932 "listen_address": { 00:16:03.932 "trtype": "TCP", 00:16:03.932 "adrfam": "IPv4", 00:16:03.932 "traddr": "10.0.0.2", 00:16:03.932 "trsvcid": "4420" 00:16:03.932 }, 00:16:03.932 "peer_address": { 00:16:03.932 "trtype": "TCP", 00:16:03.932 "adrfam": "IPv4", 00:16:03.932 "traddr": "10.0.0.1", 00:16:03.932 "trsvcid": "33686" 00:16:03.932 }, 00:16:03.932 "auth": { 00:16:03.932 "state": "completed", 00:16:03.932 "digest": "sha256", 00:16:03.932 "dhgroup": "ffdhe2048" 00:16:03.932 } 00:16:03.932 } 00:16:03.932 ]' 00:16:03.932 
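Each connect_authenticate pass above repeats the same RPC sequence for a given digest/dhgroup/key index. A hedged sketch of one sha256 + ffdhe2048 round, using only the rpc.py calls that appear in this trace (paths, NQNs and the host UUID copied from the log; it assumes the target on /var/tmp/spdk.sock and the host app on /var/tmp/host.sock are already running with the keys registered):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
SUBNQN=nqn.2024-03.io.spdk:cnode0

# Pin the host side to one digest/dhgroup combination for this pass.
$RPC -s $HOST_SOCK bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# Allow the host on the subsystem with its DH-HMAC-CHAP key pair, then attach a
# controller that must complete the handshake with the same keys.
$RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key1 --dhchap-ctrlr-key ckey1
$RPC -s $HOST_SOCK bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q $HOSTNQN -n $SUBNQN --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Confirm the qpair authenticated with the expected parameters, then tear down.
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth | .state, .digest, .dhgroup'
$RPC -s $HOST_SOCK bdev_nvme_detach_controller nvme0

The surrounding entries then exercise the same keys through the kernel initiator (nvme connect ... --dhchap-secret/--dhchap-ctrl-secret) before cleaning up with nvme disconnect and nvmf_subsystem_remove_host.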
13:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:03.932 13:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:03.932 13:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:03.932 13:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:03.932 13:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:03.932 13:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.932 13:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.932 13:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.191 13:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:YTI1MTEzZjM0NjFkNTkwYjUwNjc5ODlmOTNhZjc3NTFLAk7g: --dhchap-ctrl-secret DHHC-1:02:NjMxYTExZjU0NmQ1NTlhYjcwZDhkODc5NTk3ZjI1OTliZjJlZWFmNTdhNjdlYmRlkGfA3A==: 00:16:05.128 13:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.128 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.128 13:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:05.128 13:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.128 13:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.128 13:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.128 13:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:05.128 13:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:05.128 13:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:05.385 13:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:16:05.385 13:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:05.385 13:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:05.385 13:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:05.385 13:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:05.385 13:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.385 13:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:05.385 13:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.385 13:55:00 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:05.385 13:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.385 13:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:05.385 13:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:05.643 00:16:05.643 13:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:05.643 13:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:05.643 13:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.900 13:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.900 13:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.900 13:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.900 13:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.900 13:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.900 13:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:05.900 { 00:16:05.900 "cntlid": 13, 00:16:05.900 "qid": 0, 00:16:05.900 "state": "enabled", 00:16:05.900 "thread": "nvmf_tgt_poll_group_000", 00:16:05.900 "listen_address": { 00:16:05.900 "trtype": "TCP", 00:16:05.900 "adrfam": "IPv4", 00:16:05.900 "traddr": "10.0.0.2", 00:16:05.900 "trsvcid": "4420" 00:16:05.900 }, 00:16:05.900 "peer_address": { 00:16:05.900 "trtype": "TCP", 00:16:05.900 "adrfam": "IPv4", 00:16:05.900 "traddr": "10.0.0.1", 00:16:05.900 "trsvcid": "33708" 00:16:05.900 }, 00:16:05.900 "auth": { 00:16:05.900 "state": "completed", 00:16:05.900 "digest": "sha256", 00:16:05.900 "dhgroup": "ffdhe2048" 00:16:05.900 } 00:16:05.900 } 00:16:05.900 ]' 00:16:05.900 13:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:05.901 13:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:05.901 13:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:05.901 13:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:05.901 13:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:06.160 13:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.160 13:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.160 13:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.421 13:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:OWRhMmY3NmQxYmFmZTdlZDRlOTkzODdjN2JhNDI4ZGVhMmFlN2I2MTcyNThjNDMw/g+/BQ==: --dhchap-ctrl-secret DHHC-1:01:MTI0NWE4YzEzY2UzYmU1NWVhNmFjNWUwYTU1NWI3M2TBB42v: 00:16:07.408 13:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.408 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.408 13:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:07.408 13:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.408 13:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.408 13:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.408 13:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:07.408 13:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:07.408 13:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:07.408 13:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:16:07.408 13:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:07.408 13:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:07.408 13:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:07.408 13:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:07.408 13:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.408 13:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:16:07.408 13:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.408 13:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.408 13:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.408 13:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:07.408 13:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:07.667 00:16:07.926 13:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:07.926 13:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:07.926 13:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.926 13:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.926 13:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.926 13:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.926 13:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.184 13:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.184 13:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:08.184 { 00:16:08.184 "cntlid": 15, 00:16:08.184 "qid": 0, 00:16:08.184 "state": "enabled", 00:16:08.184 "thread": "nvmf_tgt_poll_group_000", 00:16:08.184 "listen_address": { 00:16:08.184 "trtype": "TCP", 00:16:08.184 "adrfam": "IPv4", 00:16:08.184 "traddr": "10.0.0.2", 00:16:08.184 "trsvcid": "4420" 00:16:08.184 }, 00:16:08.184 "peer_address": { 00:16:08.184 "trtype": "TCP", 00:16:08.184 "adrfam": "IPv4", 00:16:08.184 "traddr": "10.0.0.1", 00:16:08.184 "trsvcid": "34508" 00:16:08.184 }, 00:16:08.184 "auth": { 00:16:08.184 "state": "completed", 00:16:08.184 "digest": "sha256", 00:16:08.184 "dhgroup": "ffdhe2048" 00:16:08.184 } 00:16:08.184 } 00:16:08.184 ]' 00:16:08.184 13:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:08.184 13:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:08.184 13:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:08.184 13:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:08.184 13:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:08.184 13:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.184 13:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.184 13:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.442 13:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MDlhZmU4ZTRiZTcwMzBiMGJiZTJhNWU5ZWQyMDJmNTE5NDI0ZmQ0M2NjZTkxNjNkMDJlYzg2MTcxY2I1OTA5NrD6o+E=: 00:16:09.379 13:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.379 13:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:09.379 13:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.379 13:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.379 13:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.379 13:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:09.379 13:55:04 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:09.379 13:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:09.379 13:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:09.637 13:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:16:09.637 13:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:09.637 13:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:09.637 13:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:09.637 13:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:09.637 13:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.637 13:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.637 13:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.637 13:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.637 13:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.637 13:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.637 13:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.895 00:16:09.895 13:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:09.895 13:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:09.895 13:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.153 13:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.153 13:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.153 13:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.153 13:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.153 13:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.153 13:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:10.153 { 00:16:10.153 "cntlid": 17, 00:16:10.153 "qid": 0, 00:16:10.153 "state": "enabled", 00:16:10.153 "thread": "nvmf_tgt_poll_group_000", 00:16:10.153 "listen_address": { 00:16:10.153 "trtype": "TCP", 00:16:10.153 "adrfam": "IPv4", 00:16:10.153 "traddr": 
"10.0.0.2", 00:16:10.153 "trsvcid": "4420" 00:16:10.153 }, 00:16:10.153 "peer_address": { 00:16:10.153 "trtype": "TCP", 00:16:10.153 "adrfam": "IPv4", 00:16:10.153 "traddr": "10.0.0.1", 00:16:10.153 "trsvcid": "34522" 00:16:10.153 }, 00:16:10.153 "auth": { 00:16:10.153 "state": "completed", 00:16:10.153 "digest": "sha256", 00:16:10.153 "dhgroup": "ffdhe3072" 00:16:10.153 } 00:16:10.153 } 00:16:10.153 ]' 00:16:10.153 13:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:10.153 13:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:10.153 13:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:10.411 13:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:10.411 13:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:10.411 13:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.411 13:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.411 13:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.668 13:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:Nzc1ZjI1MDM1NTRmYjVjMDJlOTAwZTAxM2U1NzM2YTZkMzAxNDc5ZTRhMjRkZjcyhKo8vA==: --dhchap-ctrl-secret DHHC-1:03:MmJlYmRhNmM0YTFkOTFjOTg1ZmEwOTZmYTMxOWIzN2EzYWFjYWIwNTFmMTEwNWIxNzFiNmIyMmU3OTJiMGFiOfuiRg8=: 00:16:11.604 13:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.604 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.604 13:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:11.604 13:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.604 13:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.604 13:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.604 13:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:11.604 13:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:11.604 13:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:11.863 13:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:16:11.863 13:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:11.863 13:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:11.863 13:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:11.863 13:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:11.863 13:55:06 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.863 13:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.863 13:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.863 13:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.863 13:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.863 13:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.863 13:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.120 00:16:12.120 13:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:12.120 13:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:12.120 13:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.377 13:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.377 13:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.377 13:55:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.377 13:55:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.377 13:55:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.377 13:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:12.377 { 00:16:12.377 "cntlid": 19, 00:16:12.377 "qid": 0, 00:16:12.377 "state": "enabled", 00:16:12.377 "thread": "nvmf_tgt_poll_group_000", 00:16:12.377 "listen_address": { 00:16:12.377 "trtype": "TCP", 00:16:12.377 "adrfam": "IPv4", 00:16:12.377 "traddr": "10.0.0.2", 00:16:12.377 "trsvcid": "4420" 00:16:12.377 }, 00:16:12.377 "peer_address": { 00:16:12.377 "trtype": "TCP", 00:16:12.377 "adrfam": "IPv4", 00:16:12.377 "traddr": "10.0.0.1", 00:16:12.377 "trsvcid": "34556" 00:16:12.377 }, 00:16:12.377 "auth": { 00:16:12.377 "state": "completed", 00:16:12.377 "digest": "sha256", 00:16:12.377 "dhgroup": "ffdhe3072" 00:16:12.377 } 00:16:12.377 } 00:16:12.377 ]' 00:16:12.377 13:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:12.377 13:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:12.377 13:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:12.377 13:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:12.377 13:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:12.635 13:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.635 13:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.635 13:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.893 13:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:YTI1MTEzZjM0NjFkNTkwYjUwNjc5ODlmOTNhZjc3NTFLAk7g: --dhchap-ctrl-secret DHHC-1:02:NjMxYTExZjU0NmQ1NTlhYjcwZDhkODc5NTk3ZjI1OTliZjJlZWFmNTdhNjdlYmRlkGfA3A==: 00:16:13.828 13:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.828 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.828 13:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:13.828 13:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.828 13:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.828 13:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.828 13:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:13.828 13:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:13.828 13:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:13.828 13:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:16:13.828 13:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:13.828 13:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:13.828 13:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:13.828 13:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:13.828 13:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.828 13:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.828 13:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.828 13:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.828 13:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.828 13:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.828 13:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:14.394 00:16:14.394 13:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:14.394 13:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:14.394 13:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.394 13:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.394 13:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.394 13:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.394 13:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.394 13:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.394 13:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:14.394 { 00:16:14.394 "cntlid": 21, 00:16:14.394 "qid": 0, 00:16:14.394 "state": "enabled", 00:16:14.394 "thread": "nvmf_tgt_poll_group_000", 00:16:14.394 "listen_address": { 00:16:14.394 "trtype": "TCP", 00:16:14.394 "adrfam": "IPv4", 00:16:14.394 "traddr": "10.0.0.2", 00:16:14.394 "trsvcid": "4420" 00:16:14.394 }, 00:16:14.394 "peer_address": { 00:16:14.394 "trtype": "TCP", 00:16:14.394 "adrfam": "IPv4", 00:16:14.394 "traddr": "10.0.0.1", 00:16:14.394 "trsvcid": "34590" 00:16:14.394 }, 00:16:14.394 "auth": { 00:16:14.394 "state": "completed", 00:16:14.394 "digest": "sha256", 00:16:14.394 "dhgroup": "ffdhe3072" 00:16:14.394 } 00:16:14.394 } 00:16:14.394 ]' 00:16:14.394 13:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:14.652 13:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:14.652 13:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:14.652 13:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:14.652 13:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:14.652 13:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.652 13:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.652 13:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.910 13:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:OWRhMmY3NmQxYmFmZTdlZDRlOTkzODdjN2JhNDI4ZGVhMmFlN2I2MTcyNThjNDMw/g+/BQ==: --dhchap-ctrl-secret DHHC-1:01:MTI0NWE4YzEzY2UzYmU1NWVhNmFjNWUwYTU1NWI3M2TBB42v: 00:16:15.846 13:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.846 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
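For reference, every round in this test repeats the same host/target sequence, varying only the digest, the FFDHE group and the key index. A minimal sketch of one such round, assuming an SPDK target is already listening on 10.0.0.2:4420 with subsystem nqn.2024-03.io.spdk:cnode0 and DH-HMAC-CHAP keys named key1/ckey1 already loaded; rpc.py stands for scripts/rpc.py from the SPDK tree and <host-nqn> is a placeholder:

    # restrict the host stack to a single digest and FFDHE group for this round
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072

    # target side: allow the host and bind it to key1 (ckey1 enables bidirectional auth)
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <host-nqn> \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # host side: attach a controller, which performs in-band DH-HMAC-CHAP authentication
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q <host-nqn> -n nqn.2024-03.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # target side: confirm the qpair negotiated the expected digest/dhgroup and completed auth
    rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth | .digest, .dhgroup, .state'

    # tear the round down before the next digest/dhgroup/key combination
    rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
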
00:16:15.846 13:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:15.846 13:55:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.846 13:55:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.846 13:55:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.846 13:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:15.846 13:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:15.846 13:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:16.105 13:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:16:16.105 13:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:16.105 13:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:16.105 13:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:16.105 13:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:16.105 13:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.105 13:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:16:16.105 13:55:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.105 13:55:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.105 13:55:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.105 13:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:16.105 13:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:16.362 00:16:16.362 13:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:16.362 13:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:16.362 13:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.620 13:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.620 13:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.620 13:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.620 13:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:16:16.620 13:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.620 13:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:16.620 { 00:16:16.620 "cntlid": 23, 00:16:16.620 "qid": 0, 00:16:16.620 "state": "enabled", 00:16:16.620 "thread": "nvmf_tgt_poll_group_000", 00:16:16.620 "listen_address": { 00:16:16.620 "trtype": "TCP", 00:16:16.620 "adrfam": "IPv4", 00:16:16.620 "traddr": "10.0.0.2", 00:16:16.620 "trsvcid": "4420" 00:16:16.620 }, 00:16:16.620 "peer_address": { 00:16:16.620 "trtype": "TCP", 00:16:16.620 "adrfam": "IPv4", 00:16:16.620 "traddr": "10.0.0.1", 00:16:16.620 "trsvcid": "51874" 00:16:16.620 }, 00:16:16.620 "auth": { 00:16:16.620 "state": "completed", 00:16:16.620 "digest": "sha256", 00:16:16.620 "dhgroup": "ffdhe3072" 00:16:16.620 } 00:16:16.620 } 00:16:16.620 ]' 00:16:16.620 13:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:16.620 13:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:16.620 13:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:16.877 13:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:16.877 13:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:16.877 13:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.877 13:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.877 13:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.134 13:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MDlhZmU4ZTRiZTcwMzBiMGJiZTJhNWU5ZWQyMDJmNTE5NDI0ZmQ0M2NjZTkxNjNkMDJlYzg2MTcxY2I1OTA5NrD6o+E=: 00:16:18.068 13:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.068 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.068 13:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:18.068 13:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.068 13:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.068 13:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.068 13:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:18.068 13:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:18.068 13:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:18.068 13:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:18.325 13:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:16:18.325 13:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:18.325 13:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:18.325 13:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:18.325 13:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:18.325 13:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.325 13:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.325 13:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.325 13:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.325 13:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.325 13:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.325 13:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.584 00:16:18.584 13:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:18.584 13:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:18.584 13:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.844 13:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.844 13:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.844 13:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.844 13:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.844 13:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.844 13:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:18.844 { 00:16:18.844 "cntlid": 25, 00:16:18.844 "qid": 0, 00:16:18.844 "state": "enabled", 00:16:18.844 "thread": "nvmf_tgt_poll_group_000", 00:16:18.844 "listen_address": { 00:16:18.844 "trtype": "TCP", 00:16:18.844 "adrfam": "IPv4", 00:16:18.844 "traddr": "10.0.0.2", 00:16:18.844 "trsvcid": "4420" 00:16:18.844 }, 00:16:18.844 "peer_address": { 00:16:18.844 "trtype": "TCP", 00:16:18.844 "adrfam": "IPv4", 00:16:18.844 "traddr": "10.0.0.1", 00:16:18.844 "trsvcid": "51898" 00:16:18.844 }, 00:16:18.844 "auth": { 00:16:18.844 "state": "completed", 00:16:18.845 "digest": "sha256", 00:16:18.845 "dhgroup": "ffdhe4096" 00:16:18.845 } 00:16:18.845 } 00:16:18.845 ]' 00:16:18.845 13:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:18.845 13:55:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:18.845 13:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:19.100 13:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:19.100 13:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:19.101 13:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.101 13:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.101 13:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.356 13:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:Nzc1ZjI1MDM1NTRmYjVjMDJlOTAwZTAxM2U1NzM2YTZkMzAxNDc5ZTRhMjRkZjcyhKo8vA==: --dhchap-ctrl-secret DHHC-1:03:MmJlYmRhNmM0YTFkOTFjOTg1ZmEwOTZmYTMxOWIzN2EzYWFjYWIwNTFmMTEwNWIxNzFiNmIyMmU3OTJiMGFiOfuiRg8=: 00:16:20.291 13:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.291 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.291 13:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:20.291 13:55:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.291 13:55:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.291 13:55:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.292 13:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:20.292 13:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:20.292 13:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:20.549 13:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:16:20.549 13:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:20.549 13:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:20.549 13:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:20.549 13:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:20.549 13:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.549 13:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.549 13:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.549 13:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.549 13:55:15 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.549 13:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.549 13:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.114 00:16:21.114 13:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:21.114 13:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:21.114 13:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.114 13:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.114 13:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.114 13:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.114 13:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.372 13:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.372 13:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:21.372 { 00:16:21.372 "cntlid": 27, 00:16:21.372 "qid": 0, 00:16:21.372 "state": "enabled", 00:16:21.372 "thread": "nvmf_tgt_poll_group_000", 00:16:21.372 "listen_address": { 00:16:21.372 "trtype": "TCP", 00:16:21.372 "adrfam": "IPv4", 00:16:21.372 "traddr": "10.0.0.2", 00:16:21.372 "trsvcid": "4420" 00:16:21.372 }, 00:16:21.372 "peer_address": { 00:16:21.372 "trtype": "TCP", 00:16:21.372 "adrfam": "IPv4", 00:16:21.372 "traddr": "10.0.0.1", 00:16:21.372 "trsvcid": "51910" 00:16:21.372 }, 00:16:21.372 "auth": { 00:16:21.372 "state": "completed", 00:16:21.372 "digest": "sha256", 00:16:21.372 "dhgroup": "ffdhe4096" 00:16:21.372 } 00:16:21.372 } 00:16:21.372 ]' 00:16:21.372 13:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:21.372 13:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:21.372 13:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:21.372 13:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:21.372 13:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:21.372 13:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.372 13:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.372 13:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.630 13:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:YTI1MTEzZjM0NjFkNTkwYjUwNjc5ODlmOTNhZjc3NTFLAk7g: --dhchap-ctrl-secret DHHC-1:02:NjMxYTExZjU0NmQ1NTlhYjcwZDhkODc5NTk3ZjI1OTliZjJlZWFmNTdhNjdlYmRlkGfA3A==: 00:16:22.565 13:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.565 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.565 13:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:22.565 13:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.565 13:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.565 13:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.565 13:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:22.565 13:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:22.565 13:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:22.822 13:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:16:22.822 13:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:22.822 13:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:22.822 13:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:22.822 13:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:22.822 13:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.823 13:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.823 13:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.823 13:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.823 13:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.823 13:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.823 13:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.388 00:16:23.388 13:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:23.388 13:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.388 13:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:23.645 13:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.645 13:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.645 13:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.645 13:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.645 13:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.645 13:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:23.645 { 00:16:23.645 "cntlid": 29, 00:16:23.645 "qid": 0, 00:16:23.645 "state": "enabled", 00:16:23.645 "thread": "nvmf_tgt_poll_group_000", 00:16:23.645 "listen_address": { 00:16:23.645 "trtype": "TCP", 00:16:23.645 "adrfam": "IPv4", 00:16:23.645 "traddr": "10.0.0.2", 00:16:23.645 "trsvcid": "4420" 00:16:23.645 }, 00:16:23.645 "peer_address": { 00:16:23.645 "trtype": "TCP", 00:16:23.645 "adrfam": "IPv4", 00:16:23.645 "traddr": "10.0.0.1", 00:16:23.645 "trsvcid": "51934" 00:16:23.645 }, 00:16:23.645 "auth": { 00:16:23.645 "state": "completed", 00:16:23.645 "digest": "sha256", 00:16:23.645 "dhgroup": "ffdhe4096" 00:16:23.645 } 00:16:23.645 } 00:16:23.645 ]' 00:16:23.645 13:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:23.645 13:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:23.645 13:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:23.645 13:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:23.645 13:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:23.645 13:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.645 13:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.645 13:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.902 13:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:OWRhMmY3NmQxYmFmZTdlZDRlOTkzODdjN2JhNDI4ZGVhMmFlN2I2MTcyNThjNDMw/g+/BQ==: --dhchap-ctrl-secret DHHC-1:01:MTI0NWE4YzEzY2UzYmU1NWVhNmFjNWUwYTU1NWI3M2TBB42v: 00:16:24.841 13:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.841 13:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:24.841 13:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.841 13:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.841 13:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
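Each key is additionally exercised through the kernel initiator, as in the nvme connect/disconnect pairs above. A minimal sketch of that check; the DHHC-1 secrets and the host NQN/UUID below are placeholders, not real key material:

    # in-band DH-HMAC-CHAP from nvme-cli; -i 1 limits the connection to a single I/O queue
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q <host-nqn> --hostid <host-uuid> \
        --dhchap-secret 'DHHC-1:02:<base64 host secret>:' \
        --dhchap-ctrl-secret 'DHHC-1:01:<base64 controller secret>:'

    # a successful round is torn down again before the subsystem host entry is removed
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
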
00:16:24.841 13:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:24.841 13:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:24.841 13:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:25.098 13:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:16:25.098 13:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:25.098 13:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:25.098 13:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:25.098 13:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:25.098 13:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.098 13:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:16:25.098 13:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.098 13:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.098 13:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.098 13:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:25.098 13:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:25.665 00:16:25.665 13:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:25.665 13:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:25.665 13:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.665 13:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.665 13:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.665 13:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.665 13:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.923 13:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.923 13:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:25.923 { 00:16:25.923 "cntlid": 31, 00:16:25.923 "qid": 0, 00:16:25.923 "state": "enabled", 00:16:25.923 "thread": "nvmf_tgt_poll_group_000", 00:16:25.923 "listen_address": { 00:16:25.923 "trtype": "TCP", 00:16:25.923 "adrfam": "IPv4", 00:16:25.923 "traddr": "10.0.0.2", 00:16:25.923 "trsvcid": 
"4420" 00:16:25.923 }, 00:16:25.923 "peer_address": { 00:16:25.923 "trtype": "TCP", 00:16:25.923 "adrfam": "IPv4", 00:16:25.923 "traddr": "10.0.0.1", 00:16:25.923 "trsvcid": "51954" 00:16:25.923 }, 00:16:25.923 "auth": { 00:16:25.923 "state": "completed", 00:16:25.923 "digest": "sha256", 00:16:25.923 "dhgroup": "ffdhe4096" 00:16:25.923 } 00:16:25.923 } 00:16:25.923 ]' 00:16:25.923 13:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:25.923 13:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:25.923 13:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:25.923 13:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:25.923 13:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:25.923 13:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.923 13:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.923 13:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.180 13:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MDlhZmU4ZTRiZTcwMzBiMGJiZTJhNWU5ZWQyMDJmNTE5NDI0ZmQ0M2NjZTkxNjNkMDJlYzg2MTcxY2I1OTA5NrD6o+E=: 00:16:27.115 13:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.115 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.115 13:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:27.115 13:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.115 13:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.115 13:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.115 13:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:27.115 13:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:27.115 13:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:27.115 13:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:27.373 13:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:16:27.373 13:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:27.373 13:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:27.373 13:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:27.373 13:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:27.373 13:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.373 13:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.373 13:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.373 13:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.373 13:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.373 13:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.373 13:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.939 00:16:27.939 13:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:27.939 13:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:27.939 13:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.227 13:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.227 13:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.227 13:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.227 13:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.227 13:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.227 13:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:28.227 { 00:16:28.227 "cntlid": 33, 00:16:28.227 "qid": 0, 00:16:28.227 "state": "enabled", 00:16:28.227 "thread": "nvmf_tgt_poll_group_000", 00:16:28.227 "listen_address": { 00:16:28.227 "trtype": "TCP", 00:16:28.227 "adrfam": "IPv4", 00:16:28.227 "traddr": "10.0.0.2", 00:16:28.227 "trsvcid": "4420" 00:16:28.227 }, 00:16:28.227 "peer_address": { 00:16:28.227 "trtype": "TCP", 00:16:28.227 "adrfam": "IPv4", 00:16:28.227 "traddr": "10.0.0.1", 00:16:28.227 "trsvcid": "50696" 00:16:28.227 }, 00:16:28.227 "auth": { 00:16:28.227 "state": "completed", 00:16:28.227 "digest": "sha256", 00:16:28.227 "dhgroup": "ffdhe6144" 00:16:28.227 } 00:16:28.227 } 00:16:28.227 ]' 00:16:28.227 13:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:28.227 13:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:28.227 13:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:28.227 13:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:28.227 13:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:28.227 13:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:16:28.227 13:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.227 13:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.539 13:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:Nzc1ZjI1MDM1NTRmYjVjMDJlOTAwZTAxM2U1NzM2YTZkMzAxNDc5ZTRhMjRkZjcyhKo8vA==: --dhchap-ctrl-secret DHHC-1:03:MmJlYmRhNmM0YTFkOTFjOTg1ZmEwOTZmYTMxOWIzN2EzYWFjYWIwNTFmMTEwNWIxNzFiNmIyMmU3OTJiMGFiOfuiRg8=: 00:16:29.472 13:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.472 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.472 13:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:29.472 13:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.472 13:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.472 13:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.472 13:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:29.472 13:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:29.472 13:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:29.729 13:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:16:29.729 13:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:29.729 13:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:29.729 13:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:29.729 13:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:29.729 13:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.729 13:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.729 13:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.729 13:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.729 13:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.729 13:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.729 13:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.295 00:16:30.295 13:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:30.295 13:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:30.295 13:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.551 13:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.551 13:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.551 13:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.551 13:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.551 13:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.551 13:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:30.551 { 00:16:30.551 "cntlid": 35, 00:16:30.551 "qid": 0, 00:16:30.551 "state": "enabled", 00:16:30.551 "thread": "nvmf_tgt_poll_group_000", 00:16:30.551 "listen_address": { 00:16:30.551 "trtype": "TCP", 00:16:30.551 "adrfam": "IPv4", 00:16:30.552 "traddr": "10.0.0.2", 00:16:30.552 "trsvcid": "4420" 00:16:30.552 }, 00:16:30.552 "peer_address": { 00:16:30.552 "trtype": "TCP", 00:16:30.552 "adrfam": "IPv4", 00:16:30.552 "traddr": "10.0.0.1", 00:16:30.552 "trsvcid": "50730" 00:16:30.552 }, 00:16:30.552 "auth": { 00:16:30.552 "state": "completed", 00:16:30.552 "digest": "sha256", 00:16:30.552 "dhgroup": "ffdhe6144" 00:16:30.552 } 00:16:30.552 } 00:16:30.552 ]' 00:16:30.552 13:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:30.552 13:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:30.552 13:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:30.552 13:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:30.552 13:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:30.552 13:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.552 13:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.552 13:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.810 13:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:YTI1MTEzZjM0NjFkNTkwYjUwNjc5ODlmOTNhZjc3NTFLAk7g: --dhchap-ctrl-secret DHHC-1:02:NjMxYTExZjU0NmQ1NTlhYjcwZDhkODc5NTk3ZjI1OTliZjJlZWFmNTdhNjdlYmRlkGfA3A==: 00:16:31.739 13:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
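[editor's note] The trace above completes one full pass of the test's connect_authenticate flow for digest sha256, DH group ffdhe6144, key index 1. Condensed from the commands visible in the trace, the per-iteration sequence is roughly the sketch below. This is an illustrative reconstruction, not the script itself: "hostrpc" is the suite's wrapper that adds -s /var/tmp/host.sock so rpc.py talks to the second SPDK instance acting as the host, "rpc_cmd" appears to talk to the target app's default RPC socket, and key1/ckey1 name DH-HMAC-CHAP secrets set up earlier in the run (secrets elided here; full DHHC-1 values appear in the trace).

    # host side: restrict the initiator to one digest/dhgroup combination
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
    # target side: allow the host NQN on cnode0 with key1, plus ckey1 for bidirectional authentication
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # host side: attach the controller, which forces the DH-HMAC-CHAP exchange
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # target side: the qpair report is the actual check -- digest, dhgroup and auth state must match
    qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    jq -r '.[0].auth.digest'  <<< "$qpairs"   # expect: sha256
    jq -r '.[0].auth.dhgroup' <<< "$qpairs"   # expect: ffdhe6144
    jq -r '.[0].auth.state'   <<< "$qpairs"   # expect: completed
    # tear down the SPDK-host connection, then repeat once with the kernel initiator and raw DHHC-1 secrets
    rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
        --hostid cd6acfbe-4794-e311-a299-001e67a97b02 \
        --dhchap-secret DHHC-1:01:... --dhchap-ctrl-secret DHHC-1:02:...
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02

The same sequence then repeats for the remaining key indexes and, once those are exhausted, for the next DH group (ffdhe8192 follows below) and the next digest.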
00:16:31.739 13:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:31.739 13:55:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.739 13:55:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.739 13:55:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.739 13:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:31.739 13:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:31.739 13:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:32.303 13:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:16:32.303 13:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:32.303 13:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:32.303 13:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:32.303 13:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:32.303 13:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.303 13:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.303 13:55:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.303 13:55:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.304 13:55:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.304 13:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.304 13:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.614 00:16:32.614 13:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:32.614 13:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.614 13:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:32.871 13:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.871 13:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.871 13:55:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
00:16:32.871 13:55:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.871 13:55:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.871 13:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:32.871 { 00:16:32.871 "cntlid": 37, 00:16:32.871 "qid": 0, 00:16:32.871 "state": "enabled", 00:16:32.871 "thread": "nvmf_tgt_poll_group_000", 00:16:32.871 "listen_address": { 00:16:32.871 "trtype": "TCP", 00:16:32.871 "adrfam": "IPv4", 00:16:32.871 "traddr": "10.0.0.2", 00:16:32.871 "trsvcid": "4420" 00:16:32.871 }, 00:16:32.871 "peer_address": { 00:16:32.871 "trtype": "TCP", 00:16:32.871 "adrfam": "IPv4", 00:16:32.871 "traddr": "10.0.0.1", 00:16:32.871 "trsvcid": "50748" 00:16:32.871 }, 00:16:32.871 "auth": { 00:16:32.871 "state": "completed", 00:16:32.871 "digest": "sha256", 00:16:32.871 "dhgroup": "ffdhe6144" 00:16:32.871 } 00:16:32.871 } 00:16:32.871 ]' 00:16:32.871 13:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:32.871 13:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:32.871 13:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:33.128 13:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:33.128 13:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:33.128 13:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.128 13:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.128 13:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.384 13:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:OWRhMmY3NmQxYmFmZTdlZDRlOTkzODdjN2JhNDI4ZGVhMmFlN2I2MTcyNThjNDMw/g+/BQ==: --dhchap-ctrl-secret DHHC-1:01:MTI0NWE4YzEzY2UzYmU1NWVhNmFjNWUwYTU1NWI3M2TBB42v: 00:16:34.315 13:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.315 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.315 13:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:34.315 13:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.315 13:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.315 13:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.315 13:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:34.315 13:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:34.315 13:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:34.574 13:55:29 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:16:34.574 13:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:34.574 13:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:34.574 13:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:34.574 13:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:34.574 13:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.574 13:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:16:34.574 13:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.574 13:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.574 13:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.574 13:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:34.574 13:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:35.141 00:16:35.141 13:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:35.141 13:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:35.141 13:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.141 13:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.141 13:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.141 13:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.141 13:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.141 13:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.141 13:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:35.141 { 00:16:35.141 "cntlid": 39, 00:16:35.141 "qid": 0, 00:16:35.141 "state": "enabled", 00:16:35.141 "thread": "nvmf_tgt_poll_group_000", 00:16:35.141 "listen_address": { 00:16:35.141 "trtype": "TCP", 00:16:35.141 "adrfam": "IPv4", 00:16:35.141 "traddr": "10.0.0.2", 00:16:35.141 "trsvcid": "4420" 00:16:35.141 }, 00:16:35.141 "peer_address": { 00:16:35.141 "trtype": "TCP", 00:16:35.141 "adrfam": "IPv4", 00:16:35.141 "traddr": "10.0.0.1", 00:16:35.141 "trsvcid": "50782" 00:16:35.141 }, 00:16:35.141 "auth": { 00:16:35.141 "state": "completed", 00:16:35.141 "digest": "sha256", 00:16:35.141 "dhgroup": "ffdhe6144" 00:16:35.141 } 00:16:35.141 } 00:16:35.141 ]' 00:16:35.141 13:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:35.399 13:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # 
[[ sha256 == \s\h\a\2\5\6 ]] 00:16:35.399 13:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:35.399 13:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:35.399 13:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:35.399 13:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.399 13:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.399 13:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.656 13:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MDlhZmU4ZTRiZTcwMzBiMGJiZTJhNWU5ZWQyMDJmNTE5NDI0ZmQ0M2NjZTkxNjNkMDJlYzg2MTcxY2I1OTA5NrD6o+E=: 00:16:36.594 13:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.594 13:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:36.594 13:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.594 13:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.594 13:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.594 13:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:36.594 13:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:36.594 13:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:36.594 13:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:36.851 13:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:16:36.851 13:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:36.851 13:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:36.851 13:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:36.851 13:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:36.851 13:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.851 13:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.851 13:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.851 13:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.851 13:55:31 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.851 13:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.851 13:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.798 00:16:37.798 13:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:37.798 13:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:37.798 13:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.798 13:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.798 13:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.798 13:55:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.798 13:55:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.798 13:55:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.798 13:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:37.798 { 00:16:37.798 "cntlid": 41, 00:16:37.798 "qid": 0, 00:16:37.798 "state": "enabled", 00:16:37.798 "thread": "nvmf_tgt_poll_group_000", 00:16:37.798 "listen_address": { 00:16:37.798 "trtype": "TCP", 00:16:37.798 "adrfam": "IPv4", 00:16:37.798 "traddr": "10.0.0.2", 00:16:37.798 "trsvcid": "4420" 00:16:37.798 }, 00:16:37.798 "peer_address": { 00:16:37.798 "trtype": "TCP", 00:16:37.798 "adrfam": "IPv4", 00:16:37.798 "traddr": "10.0.0.1", 00:16:37.798 "trsvcid": "53114" 00:16:37.798 }, 00:16:37.798 "auth": { 00:16:37.798 "state": "completed", 00:16:37.798 "digest": "sha256", 00:16:37.798 "dhgroup": "ffdhe8192" 00:16:37.798 } 00:16:37.798 } 00:16:37.798 ]' 00:16:37.798 13:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:37.798 13:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:37.798 13:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:38.057 13:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:38.057 13:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:38.057 13:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.057 13:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.057 13:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.315 13:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 
--hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:Nzc1ZjI1MDM1NTRmYjVjMDJlOTAwZTAxM2U1NzM2YTZkMzAxNDc5ZTRhMjRkZjcyhKo8vA==: --dhchap-ctrl-secret DHHC-1:03:MmJlYmRhNmM0YTFkOTFjOTg1ZmEwOTZmYTMxOWIzN2EzYWFjYWIwNTFmMTEwNWIxNzFiNmIyMmU3OTJiMGFiOfuiRg8=: 00:16:39.253 13:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.254 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.254 13:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:39.254 13:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.254 13:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.254 13:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.254 13:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:39.254 13:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:39.254 13:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:39.511 13:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:16:39.511 13:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:39.511 13:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:39.511 13:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:39.511 13:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:39.511 13:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.511 13:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:39.511 13:55:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.511 13:55:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.511 13:55:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.511 13:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:39.511 13:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.448 00:16:40.448 13:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:40.448 13:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:40.448 13:55:34 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.448 13:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.448 13:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.448 13:55:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.448 13:55:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.448 13:55:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.448 13:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:40.448 { 00:16:40.448 "cntlid": 43, 00:16:40.448 "qid": 0, 00:16:40.448 "state": "enabled", 00:16:40.448 "thread": "nvmf_tgt_poll_group_000", 00:16:40.448 "listen_address": { 00:16:40.448 "trtype": "TCP", 00:16:40.448 "adrfam": "IPv4", 00:16:40.448 "traddr": "10.0.0.2", 00:16:40.448 "trsvcid": "4420" 00:16:40.448 }, 00:16:40.448 "peer_address": { 00:16:40.448 "trtype": "TCP", 00:16:40.448 "adrfam": "IPv4", 00:16:40.448 "traddr": "10.0.0.1", 00:16:40.448 "trsvcid": "53144" 00:16:40.448 }, 00:16:40.448 "auth": { 00:16:40.448 "state": "completed", 00:16:40.448 "digest": "sha256", 00:16:40.448 "dhgroup": "ffdhe8192" 00:16:40.448 } 00:16:40.448 } 00:16:40.448 ]' 00:16:40.448 13:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:40.448 13:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:40.448 13:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:40.706 13:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:40.706 13:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:40.706 13:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.706 13:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.706 13:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.963 13:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:YTI1MTEzZjM0NjFkNTkwYjUwNjc5ODlmOTNhZjc3NTFLAk7g: --dhchap-ctrl-secret DHHC-1:02:NjMxYTExZjU0NmQ1NTlhYjcwZDhkODc5NTk3ZjI1OTliZjJlZWFmNTdhNjdlYmRlkGfA3A==: 00:16:41.897 13:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.897 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.897 13:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:41.897 13:55:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.897 13:55:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.897 13:55:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.897 13:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 
-- # for keyid in "${!keys[@]}" 00:16:41.897 13:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:41.897 13:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:42.155 13:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:16:42.155 13:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:42.155 13:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:42.155 13:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:42.155 13:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:42.155 13:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.155 13:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.155 13:55:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.155 13:55:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.155 13:55:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.155 13:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.155 13:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.089 00:16:43.089 13:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:43.089 13:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:43.089 13:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.089 13:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.089 13:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.089 13:55:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.089 13:55:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.089 13:55:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.089 13:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:43.089 { 00:16:43.089 "cntlid": 45, 00:16:43.089 "qid": 0, 00:16:43.089 "state": "enabled", 00:16:43.089 "thread": "nvmf_tgt_poll_group_000", 00:16:43.089 "listen_address": { 00:16:43.089 "trtype": "TCP", 00:16:43.089 "adrfam": "IPv4", 00:16:43.089 "traddr": "10.0.0.2", 00:16:43.089 
"trsvcid": "4420" 00:16:43.089 }, 00:16:43.089 "peer_address": { 00:16:43.089 "trtype": "TCP", 00:16:43.089 "adrfam": "IPv4", 00:16:43.089 "traddr": "10.0.0.1", 00:16:43.089 "trsvcid": "53166" 00:16:43.089 }, 00:16:43.089 "auth": { 00:16:43.089 "state": "completed", 00:16:43.089 "digest": "sha256", 00:16:43.089 "dhgroup": "ffdhe8192" 00:16:43.089 } 00:16:43.089 } 00:16:43.089 ]' 00:16:43.089 13:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:43.089 13:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:43.089 13:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:43.089 13:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:43.089 13:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:43.346 13:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.346 13:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.346 13:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.604 13:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:OWRhMmY3NmQxYmFmZTdlZDRlOTkzODdjN2JhNDI4ZGVhMmFlN2I2MTcyNThjNDMw/g+/BQ==: --dhchap-ctrl-secret DHHC-1:01:MTI0NWE4YzEzY2UzYmU1NWVhNmFjNWUwYTU1NWI3M2TBB42v: 00:16:44.539 13:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.539 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.539 13:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:44.539 13:55:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.539 13:55:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.539 13:55:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.539 13:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:44.539 13:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:44.539 13:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:44.539 13:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:16:44.539 13:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:44.539 13:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:44.539 13:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:44.539 13:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:44.539 13:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
00:16:44.539 13:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:16:44.539 13:55:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.539 13:55:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.539 13:55:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.539 13:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:44.540 13:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:45.473 00:16:45.473 13:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:45.473 13:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:45.473 13:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.730 13:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.730 13:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.730 13:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.730 13:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.730 13:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.730 13:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:45.730 { 00:16:45.730 "cntlid": 47, 00:16:45.730 "qid": 0, 00:16:45.730 "state": "enabled", 00:16:45.730 "thread": "nvmf_tgt_poll_group_000", 00:16:45.730 "listen_address": { 00:16:45.730 "trtype": "TCP", 00:16:45.730 "adrfam": "IPv4", 00:16:45.730 "traddr": "10.0.0.2", 00:16:45.730 "trsvcid": "4420" 00:16:45.730 }, 00:16:45.730 "peer_address": { 00:16:45.730 "trtype": "TCP", 00:16:45.730 "adrfam": "IPv4", 00:16:45.730 "traddr": "10.0.0.1", 00:16:45.730 "trsvcid": "53192" 00:16:45.730 }, 00:16:45.730 "auth": { 00:16:45.730 "state": "completed", 00:16:45.730 "digest": "sha256", 00:16:45.731 "dhgroup": "ffdhe8192" 00:16:45.731 } 00:16:45.731 } 00:16:45.731 ]' 00:16:45.731 13:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:45.731 13:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:45.731 13:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:45.731 13:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:45.731 13:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:45.731 13:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.731 13:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller 
nvme0 00:16:45.731 13:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.016 13:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MDlhZmU4ZTRiZTcwMzBiMGJiZTJhNWU5ZWQyMDJmNTE5NDI0ZmQ0M2NjZTkxNjNkMDJlYzg2MTcxY2I1OTA5NrD6o+E=: 00:16:46.950 13:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.950 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.950 13:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:46.950 13:55:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.950 13:55:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.950 13:55:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.950 13:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:46.950 13:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:46.950 13:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:46.950 13:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:46.950 13:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:47.207 13:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:16:47.207 13:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:47.207 13:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:47.207 13:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:47.207 13:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:47.207 13:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.207 13:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.207 13:55:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.207 13:55:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.207 13:55:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.207 13:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.207 13:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.772 00:16:47.772 13:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:47.772 13:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:47.772 13:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.772 13:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.772 13:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.772 13:55:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.772 13:55:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.772 13:55:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.772 13:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:47.772 { 00:16:47.772 "cntlid": 49, 00:16:47.772 "qid": 0, 00:16:47.772 "state": "enabled", 00:16:47.772 "thread": "nvmf_tgt_poll_group_000", 00:16:47.772 "listen_address": { 00:16:47.772 "trtype": "TCP", 00:16:47.772 "adrfam": "IPv4", 00:16:47.772 "traddr": "10.0.0.2", 00:16:47.772 "trsvcid": "4420" 00:16:47.772 }, 00:16:47.772 "peer_address": { 00:16:47.772 "trtype": "TCP", 00:16:47.772 "adrfam": "IPv4", 00:16:47.772 "traddr": "10.0.0.1", 00:16:47.772 "trsvcid": "54134" 00:16:47.772 }, 00:16:47.772 "auth": { 00:16:47.772 "state": "completed", 00:16:47.772 "digest": "sha384", 00:16:47.772 "dhgroup": "null" 00:16:47.772 } 00:16:47.772 } 00:16:47.772 ]' 00:16:47.772 13:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:48.030 13:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:48.030 13:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:48.030 13:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:48.030 13:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:48.030 13:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.030 13:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.030 13:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.287 13:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:Nzc1ZjI1MDM1NTRmYjVjMDJlOTAwZTAxM2U1NzM2YTZkMzAxNDc5ZTRhMjRkZjcyhKo8vA==: --dhchap-ctrl-secret DHHC-1:03:MmJlYmRhNmM0YTFkOTFjOTg1ZmEwOTZmYTMxOWIzN2EzYWFjYWIwNTFmMTEwNWIxNzFiNmIyMmU3OTJiMGFiOfuiRg8=: 00:16:49.219 13:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.219 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.219 13:55:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:49.219 13:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.219 13:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.219 13:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.219 13:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:49.219 13:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:49.219 13:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:49.476 13:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:16:49.476 13:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:49.476 13:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:49.476 13:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:49.476 13:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:49.476 13:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.476 13:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.476 13:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.476 13:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.476 13:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.476 13:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.476 13:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.733 00:16:49.990 13:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:49.990 13:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:49.990 13:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.262 13:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.262 13:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.262 13:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.262 13:55:44 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:16:50.262 13:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.262 13:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:50.262 { 00:16:50.262 "cntlid": 51, 00:16:50.262 "qid": 0, 00:16:50.263 "state": "enabled", 00:16:50.263 "thread": "nvmf_tgt_poll_group_000", 00:16:50.263 "listen_address": { 00:16:50.263 "trtype": "TCP", 00:16:50.263 "adrfam": "IPv4", 00:16:50.263 "traddr": "10.0.0.2", 00:16:50.263 "trsvcid": "4420" 00:16:50.263 }, 00:16:50.263 "peer_address": { 00:16:50.263 "trtype": "TCP", 00:16:50.263 "adrfam": "IPv4", 00:16:50.263 "traddr": "10.0.0.1", 00:16:50.263 "trsvcid": "54156" 00:16:50.263 }, 00:16:50.263 "auth": { 00:16:50.263 "state": "completed", 00:16:50.263 "digest": "sha384", 00:16:50.263 "dhgroup": "null" 00:16:50.263 } 00:16:50.263 } 00:16:50.263 ]' 00:16:50.263 13:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:50.263 13:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:50.263 13:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:50.263 13:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:50.263 13:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:50.263 13:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.263 13:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.263 13:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.593 13:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:YTI1MTEzZjM0NjFkNTkwYjUwNjc5ODlmOTNhZjc3NTFLAk7g: --dhchap-ctrl-secret DHHC-1:02:NjMxYTExZjU0NmQ1NTlhYjcwZDhkODc5NTk3ZjI1OTliZjJlZWFmNTdhNjdlYmRlkGfA3A==: 00:16:51.531 13:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.531 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.531 13:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:51.531 13:55:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.531 13:55:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.531 13:55:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.531 13:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:51.531 13:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:51.531 13:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:51.792 13:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:16:51.792 
13:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:51.792 13:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:51.792 13:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:51.792 13:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:51.792 13:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.792 13:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.792 13:55:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.792 13:55:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.792 13:55:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.792 13:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.792 13:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.050 00:16:52.050 13:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:52.050 13:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:52.050 13:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.308 13:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.308 13:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.308 13:55:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.308 13:55:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.308 13:55:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.308 13:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:52.308 { 00:16:52.308 "cntlid": 53, 00:16:52.308 "qid": 0, 00:16:52.308 "state": "enabled", 00:16:52.308 "thread": "nvmf_tgt_poll_group_000", 00:16:52.308 "listen_address": { 00:16:52.308 "trtype": "TCP", 00:16:52.308 "adrfam": "IPv4", 00:16:52.308 "traddr": "10.0.0.2", 00:16:52.308 "trsvcid": "4420" 00:16:52.308 }, 00:16:52.308 "peer_address": { 00:16:52.308 "trtype": "TCP", 00:16:52.308 "adrfam": "IPv4", 00:16:52.308 "traddr": "10.0.0.1", 00:16:52.308 "trsvcid": "54186" 00:16:52.308 }, 00:16:52.308 "auth": { 00:16:52.308 "state": "completed", 00:16:52.308 "digest": "sha384", 00:16:52.308 "dhgroup": "null" 00:16:52.308 } 00:16:52.308 } 00:16:52.308 ]' 00:16:52.308 13:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:52.308 13:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:16:52.308 13:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:52.308 13:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:52.308 13:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:52.566 13:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.566 13:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.566 13:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.824 13:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:OWRhMmY3NmQxYmFmZTdlZDRlOTkzODdjN2JhNDI4ZGVhMmFlN2I2MTcyNThjNDMw/g+/BQ==: --dhchap-ctrl-secret DHHC-1:01:MTI0NWE4YzEzY2UzYmU1NWVhNmFjNWUwYTU1NWI3M2TBB42v: 00:16:53.760 13:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.760 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.760 13:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:53.760 13:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.760 13:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.760 13:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.760 13:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:53.760 13:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:53.760 13:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:54.018 13:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:16:54.018 13:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:54.018 13:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:54.018 13:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:54.018 13:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:54.018 13:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.018 13:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:16:54.018 13:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.018 13:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.018 13:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.018 13:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:54.018 13:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:54.276 00:16:54.276 13:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:54.276 13:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:54.276 13:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.533 13:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.533 13:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.533 13:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.533 13:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.533 13:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.533 13:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:54.533 { 00:16:54.533 "cntlid": 55, 00:16:54.533 "qid": 0, 00:16:54.533 "state": "enabled", 00:16:54.533 "thread": "nvmf_tgt_poll_group_000", 00:16:54.533 "listen_address": { 00:16:54.533 "trtype": "TCP", 00:16:54.533 "adrfam": "IPv4", 00:16:54.533 "traddr": "10.0.0.2", 00:16:54.533 "trsvcid": "4420" 00:16:54.533 }, 00:16:54.533 "peer_address": { 00:16:54.533 "trtype": "TCP", 00:16:54.533 "adrfam": "IPv4", 00:16:54.533 "traddr": "10.0.0.1", 00:16:54.533 "trsvcid": "54230" 00:16:54.533 }, 00:16:54.533 "auth": { 00:16:54.533 "state": "completed", 00:16:54.533 "digest": "sha384", 00:16:54.533 "dhgroup": "null" 00:16:54.533 } 00:16:54.533 } 00:16:54.533 ]' 00:16:54.533 13:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:54.533 13:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:54.533 13:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:54.533 13:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:54.533 13:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:54.533 13:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.533 13:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.533 13:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.791 13:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MDlhZmU4ZTRiZTcwMzBiMGJiZTJhNWU5ZWQyMDJmNTE5NDI0ZmQ0M2NjZTkxNjNkMDJlYzg2MTcxY2I1OTA5NrD6o+E=: 00:16:55.726 13:55:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.726 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.726 13:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:55.726 13:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.726 13:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.726 13:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.726 13:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:55.726 13:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:55.726 13:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:55.726 13:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:55.984 13:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:16:55.984 13:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:55.984 13:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:55.984 13:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:55.984 13:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:55.984 13:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.984 13:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.984 13:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.984 13:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.984 13:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.984 13:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.984 13:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.241 00:16:56.241 13:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:56.241 13:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:56.241 13:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.499 13:55:51 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.499 13:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.499 13:55:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.499 13:55:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.499 13:55:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.499 13:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:56.499 { 00:16:56.499 "cntlid": 57, 00:16:56.499 "qid": 0, 00:16:56.499 "state": "enabled", 00:16:56.499 "thread": "nvmf_tgt_poll_group_000", 00:16:56.499 "listen_address": { 00:16:56.499 "trtype": "TCP", 00:16:56.499 "adrfam": "IPv4", 00:16:56.499 "traddr": "10.0.0.2", 00:16:56.499 "trsvcid": "4420" 00:16:56.499 }, 00:16:56.499 "peer_address": { 00:16:56.499 "trtype": "TCP", 00:16:56.499 "adrfam": "IPv4", 00:16:56.499 "traddr": "10.0.0.1", 00:16:56.499 "trsvcid": "39862" 00:16:56.499 }, 00:16:56.499 "auth": { 00:16:56.499 "state": "completed", 00:16:56.499 "digest": "sha384", 00:16:56.499 "dhgroup": "ffdhe2048" 00:16:56.499 } 00:16:56.499 } 00:16:56.499 ]' 00:16:56.499 13:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:56.757 13:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:56.757 13:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:56.757 13:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:56.757 13:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:56.757 13:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.757 13:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.757 13:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.014 13:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:Nzc1ZjI1MDM1NTRmYjVjMDJlOTAwZTAxM2U1NzM2YTZkMzAxNDc5ZTRhMjRkZjcyhKo8vA==: --dhchap-ctrl-secret DHHC-1:03:MmJlYmRhNmM0YTFkOTFjOTg1ZmEwOTZmYTMxOWIzN2EzYWFjYWIwNTFmMTEwNWIxNzFiNmIyMmU3OTJiMGFiOfuiRg8=: 00:16:57.949 13:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.949 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.949 13:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:57.949 13:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.949 13:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.949 13:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.949 13:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:57.949 13:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:57.949 13:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:57.949 13:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:16:57.949 13:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:57.949 13:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:57.949 13:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:57.949 13:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:57.949 13:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.949 13:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.949 13:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.949 13:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.949 13:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.949 13:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.949 13:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.514 00:16:58.514 13:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:58.514 13:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:58.514 13:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.772 13:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.772 13:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.772 13:55:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.772 13:55:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.772 13:55:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.772 13:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:58.772 { 00:16:58.772 "cntlid": 59, 00:16:58.772 "qid": 0, 00:16:58.772 "state": "enabled", 00:16:58.772 "thread": "nvmf_tgt_poll_group_000", 00:16:58.772 "listen_address": { 00:16:58.772 "trtype": "TCP", 00:16:58.772 "adrfam": "IPv4", 00:16:58.772 "traddr": "10.0.0.2", 00:16:58.772 "trsvcid": "4420" 00:16:58.772 }, 00:16:58.772 "peer_address": { 00:16:58.772 "trtype": "TCP", 00:16:58.772 "adrfam": "IPv4", 00:16:58.772 
"traddr": "10.0.0.1", 00:16:58.772 "trsvcid": "39896" 00:16:58.772 }, 00:16:58.772 "auth": { 00:16:58.772 "state": "completed", 00:16:58.772 "digest": "sha384", 00:16:58.772 "dhgroup": "ffdhe2048" 00:16:58.772 } 00:16:58.772 } 00:16:58.772 ]' 00:16:58.772 13:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:58.772 13:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:58.772 13:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:58.772 13:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:58.772 13:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:58.772 13:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.772 13:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.772 13:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.030 13:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:YTI1MTEzZjM0NjFkNTkwYjUwNjc5ODlmOTNhZjc3NTFLAk7g: --dhchap-ctrl-secret DHHC-1:02:NjMxYTExZjU0NmQ1NTlhYjcwZDhkODc5NTk3ZjI1OTliZjJlZWFmNTdhNjdlYmRlkGfA3A==: 00:16:59.965 13:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.965 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.965 13:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:59.965 13:55:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.965 13:55:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.965 13:55:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.965 13:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:59.965 13:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:59.965 13:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:00.223 13:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:17:00.223 13:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:00.223 13:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:00.223 13:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:00.223 13:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:00.223 13:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.223 13:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.223 13:55:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.223 13:55:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.223 13:55:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.223 13:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.223 13:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.480 00:17:00.480 13:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:00.480 13:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:00.480 13:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.737 13:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.737 13:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.737 13:55:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.737 13:55:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.737 13:55:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.737 13:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:00.737 { 00:17:00.737 "cntlid": 61, 00:17:00.737 "qid": 0, 00:17:00.737 "state": "enabled", 00:17:00.737 "thread": "nvmf_tgt_poll_group_000", 00:17:00.737 "listen_address": { 00:17:00.737 "trtype": "TCP", 00:17:00.737 "adrfam": "IPv4", 00:17:00.737 "traddr": "10.0.0.2", 00:17:00.737 "trsvcid": "4420" 00:17:00.737 }, 00:17:00.737 "peer_address": { 00:17:00.737 "trtype": "TCP", 00:17:00.737 "adrfam": "IPv4", 00:17:00.737 "traddr": "10.0.0.1", 00:17:00.737 "trsvcid": "39920" 00:17:00.737 }, 00:17:00.737 "auth": { 00:17:00.737 "state": "completed", 00:17:00.737 "digest": "sha384", 00:17:00.737 "dhgroup": "ffdhe2048" 00:17:00.737 } 00:17:00.737 } 00:17:00.737 ]' 00:17:00.737 13:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:00.737 13:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:00.737 13:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:00.737 13:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:00.738 13:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:00.997 13:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.997 13:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.997 13:55:55 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.255 13:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:OWRhMmY3NmQxYmFmZTdlZDRlOTkzODdjN2JhNDI4ZGVhMmFlN2I2MTcyNThjNDMw/g+/BQ==: --dhchap-ctrl-secret DHHC-1:01:MTI0NWE4YzEzY2UzYmU1NWVhNmFjNWUwYTU1NWI3M2TBB42v: 00:17:02.186 13:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.186 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.186 13:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:02.186 13:55:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.186 13:55:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.186 13:55:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.186 13:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:02.186 13:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:02.186 13:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:02.445 13:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:17:02.445 13:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:02.445 13:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:02.445 13:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:02.445 13:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:02.445 13:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.445 13:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:17:02.445 13:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.445 13:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.445 13:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.445 13:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:02.445 13:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:02.702 00:17:02.702 13:55:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:02.702 13:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:02.702 13:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.958 13:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.958 13:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.958 13:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.958 13:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.958 13:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.958 13:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:02.958 { 00:17:02.958 "cntlid": 63, 00:17:02.958 "qid": 0, 00:17:02.958 "state": "enabled", 00:17:02.958 "thread": "nvmf_tgt_poll_group_000", 00:17:02.958 "listen_address": { 00:17:02.958 "trtype": "TCP", 00:17:02.958 "adrfam": "IPv4", 00:17:02.958 "traddr": "10.0.0.2", 00:17:02.958 "trsvcid": "4420" 00:17:02.958 }, 00:17:02.958 "peer_address": { 00:17:02.959 "trtype": "TCP", 00:17:02.959 "adrfam": "IPv4", 00:17:02.959 "traddr": "10.0.0.1", 00:17:02.959 "trsvcid": "39948" 00:17:02.959 }, 00:17:02.959 "auth": { 00:17:02.959 "state": "completed", 00:17:02.959 "digest": "sha384", 00:17:02.959 "dhgroup": "ffdhe2048" 00:17:02.959 } 00:17:02.959 } 00:17:02.959 ]' 00:17:02.959 13:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:02.959 13:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:02.959 13:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:02.959 13:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:02.959 13:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:02.959 13:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.959 13:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.959 13:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.215 13:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MDlhZmU4ZTRiZTcwMzBiMGJiZTJhNWU5ZWQyMDJmNTE5NDI0ZmQ0M2NjZTkxNjNkMDJlYzg2MTcxY2I1OTA5NrD6o+E=: 00:17:04.149 13:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.149 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.149 13:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:04.149 13:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.149 13:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
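For readers following the trace: each pass above is one iteration of connect_authenticate from target/auth.sh, and the same cycle repeats below for every remaining digest/DH-group/key combination. The sketch here distills a single cycle from the commands in this run; the addresses, NQNs, key names and DHHC-1 secrets are the ones logged above, while driving the target over rpc.py's default socket (instead of the test's rpc_cmd wrapper) and the prior registration of the named keys key1/ckey1 are assumptions made for readability.

#!/usr/bin/env bash
# Minimal sketch of one connect_authenticate cycle (sha384 + ffdhe2048 + key index 1),
# assuming the SPDK target already listens on 10.0.0.2:4420 and the named keys
# key1/ckey1 are registered in both the host and target applications.
set -e

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostrpc() { "$rpc_py" -s /var/tmp/host.sock "$@"; }   # host-side SPDK app (bdev_nvme), as in the trace
tgtrpc()  { "$rpc_py" "$@"; }                         # target app; default RPC socket is an assumption

hostid=cd6acfbe-4794-e311-a299-001e67a97b02
hostnqn="nqn.2014-08.org.nvmexpress:uuid:${hostid}"
subnqn="nqn.2024-03.io.spdk:cnode0"
key1="DHHC-1:01:YTI1MTEzZjM0NjFkNTkwYjUwNjc5ODlmOTNhZjc3NTFLAk7g:"                                  # host secret from this run
ckey1="DHHC-1:02:NjMxYTExZjU0NmQ1NTlhYjcwZDhkODc5NTk3ZjI1OTliZjJlZWFmNTdhNjdlYmRlkGfA3A==:"          # controller secret from this run

# 1. Limit the initiator to the digest/DH-group pair under test.
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

# 2. Allow the host on the subsystem with this key pair (target side).
tgtrpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# 3. Authenticate with the SPDK initiator and verify the qpair really used DH-CHAP.
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
tgtrpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'    # expect "completed"
hostrpc bdev_nvme_detach_controller nvme0

# 4. Repeat the handshake with the kernel initiator, then clean up for the next key.
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" \
        --dhchap-secret "$key1" --dhchap-ctrl-secret "$ckey1"
nvme disconnect -n "$subnqn"
tgtrpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The [[ sha384 == \s\h\a\3\8\4 ]]-style comparisons in the trace are the test's own assertions on the same jq output (digest, dhgroup and auth state), so a failed handshake would show up there rather than as a connect error.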
00:17:04.149 13:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.149 13:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:04.149 13:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:04.149 13:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:04.149 13:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:04.407 13:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:17:04.407 13:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:04.407 13:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:04.407 13:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:04.407 13:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:04.407 13:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.407 13:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.407 13:55:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.407 13:55:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.407 13:55:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.407 13:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.407 13:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.664 00:17:04.664 13:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:04.664 13:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:04.664 13:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.921 13:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.921 13:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.921 13:55:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.921 13:55:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.921 13:55:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.922 13:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:04.922 { 
00:17:04.922 "cntlid": 65, 00:17:04.922 "qid": 0, 00:17:04.922 "state": "enabled", 00:17:04.922 "thread": "nvmf_tgt_poll_group_000", 00:17:04.922 "listen_address": { 00:17:04.922 "trtype": "TCP", 00:17:04.922 "adrfam": "IPv4", 00:17:04.922 "traddr": "10.0.0.2", 00:17:04.922 "trsvcid": "4420" 00:17:04.922 }, 00:17:04.922 "peer_address": { 00:17:04.922 "trtype": "TCP", 00:17:04.922 "adrfam": "IPv4", 00:17:04.922 "traddr": "10.0.0.1", 00:17:04.922 "trsvcid": "39966" 00:17:04.922 }, 00:17:04.922 "auth": { 00:17:04.922 "state": "completed", 00:17:04.922 "digest": "sha384", 00:17:04.922 "dhgroup": "ffdhe3072" 00:17:04.922 } 00:17:04.922 } 00:17:04.922 ]' 00:17:04.922 13:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:04.922 13:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:04.922 13:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:05.179 13:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:05.179 13:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:05.179 13:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.179 13:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.179 13:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.436 13:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:Nzc1ZjI1MDM1NTRmYjVjMDJlOTAwZTAxM2U1NzM2YTZkMzAxNDc5ZTRhMjRkZjcyhKo8vA==: --dhchap-ctrl-secret DHHC-1:03:MmJlYmRhNmM0YTFkOTFjOTg1ZmEwOTZmYTMxOWIzN2EzYWFjYWIwNTFmMTEwNWIxNzFiNmIyMmU3OTJiMGFiOfuiRg8=: 00:17:06.369 13:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.369 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.369 13:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:06.369 13:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.369 13:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.369 13:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.369 13:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:06.369 13:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:06.369 13:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:06.627 13:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:17:06.627 13:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:06.627 13:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:17:06.627 13:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:06.627 13:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:06.627 13:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.627 13:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.627 13:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.627 13:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.627 13:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.627 13:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.627 13:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.885 00:17:06.885 13:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:06.885 13:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:06.885 13:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.142 13:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.142 13:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.142 13:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.142 13:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.142 13:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.142 13:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:07.142 { 00:17:07.142 "cntlid": 67, 00:17:07.142 "qid": 0, 00:17:07.142 "state": "enabled", 00:17:07.142 "thread": "nvmf_tgt_poll_group_000", 00:17:07.142 "listen_address": { 00:17:07.142 "trtype": "TCP", 00:17:07.142 "adrfam": "IPv4", 00:17:07.142 "traddr": "10.0.0.2", 00:17:07.142 "trsvcid": "4420" 00:17:07.142 }, 00:17:07.142 "peer_address": { 00:17:07.142 "trtype": "TCP", 00:17:07.142 "adrfam": "IPv4", 00:17:07.142 "traddr": "10.0.0.1", 00:17:07.142 "trsvcid": "34432" 00:17:07.142 }, 00:17:07.142 "auth": { 00:17:07.142 "state": "completed", 00:17:07.142 "digest": "sha384", 00:17:07.142 "dhgroup": "ffdhe3072" 00:17:07.142 } 00:17:07.142 } 00:17:07.142 ]' 00:17:07.142 13:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:07.142 13:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:07.142 13:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:07.142 13:56:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:07.142 13:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:07.142 13:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.142 13:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.142 13:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.399 13:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:YTI1MTEzZjM0NjFkNTkwYjUwNjc5ODlmOTNhZjc3NTFLAk7g: --dhchap-ctrl-secret DHHC-1:02:NjMxYTExZjU0NmQ1NTlhYjcwZDhkODc5NTk3ZjI1OTliZjJlZWFmNTdhNjdlYmRlkGfA3A==: 00:17:08.333 13:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.333 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.333 13:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:08.333 13:56:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.333 13:56:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.333 13:56:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.333 13:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:08.333 13:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:08.333 13:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:08.590 13:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:17:08.590 13:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:08.590 13:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:08.590 13:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:08.590 13:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:08.590 13:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.590 13:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.590 13:56:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.590 13:56:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.590 13:56:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.590 13:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.590 13:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.847 00:17:08.847 13:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:08.847 13:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:08.847 13:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.105 13:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.105 13:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.105 13:56:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.105 13:56:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.105 13:56:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.105 13:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:09.105 { 00:17:09.105 "cntlid": 69, 00:17:09.105 "qid": 0, 00:17:09.105 "state": "enabled", 00:17:09.105 "thread": "nvmf_tgt_poll_group_000", 00:17:09.105 "listen_address": { 00:17:09.105 "trtype": "TCP", 00:17:09.105 "adrfam": "IPv4", 00:17:09.105 "traddr": "10.0.0.2", 00:17:09.105 "trsvcid": "4420" 00:17:09.105 }, 00:17:09.105 "peer_address": { 00:17:09.105 "trtype": "TCP", 00:17:09.105 "adrfam": "IPv4", 00:17:09.105 "traddr": "10.0.0.1", 00:17:09.105 "trsvcid": "34462" 00:17:09.105 }, 00:17:09.105 "auth": { 00:17:09.105 "state": "completed", 00:17:09.105 "digest": "sha384", 00:17:09.105 "dhgroup": "ffdhe3072" 00:17:09.105 } 00:17:09.105 } 00:17:09.105 ]' 00:17:09.105 13:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:09.363 13:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:09.363 13:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:09.363 13:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:09.363 13:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:09.363 13:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.363 13:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.363 13:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.621 13:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:OWRhMmY3NmQxYmFmZTdlZDRlOTkzODdjN2JhNDI4ZGVhMmFlN2I2MTcyNThjNDMw/g+/BQ==: --dhchap-ctrl-secret 
DHHC-1:01:MTI0NWE4YzEzY2UzYmU1NWVhNmFjNWUwYTU1NWI3M2TBB42v: 00:17:10.554 13:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.554 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.554 13:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:10.554 13:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.554 13:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.554 13:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.554 13:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:10.554 13:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:10.554 13:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:10.812 13:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:17:10.812 13:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:10.812 13:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:10.812 13:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:10.812 13:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:10.812 13:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.812 13:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:17:10.812 13:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.812 13:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.812 13:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.812 13:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:10.812 13:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:11.070 00:17:11.070 13:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:11.070 13:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:11.070 13:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.328 13:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.328 13:56:06 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.328 13:56:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.328 13:56:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.328 13:56:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.328 13:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:11.328 { 00:17:11.328 "cntlid": 71, 00:17:11.328 "qid": 0, 00:17:11.328 "state": "enabled", 00:17:11.328 "thread": "nvmf_tgt_poll_group_000", 00:17:11.328 "listen_address": { 00:17:11.328 "trtype": "TCP", 00:17:11.328 "adrfam": "IPv4", 00:17:11.328 "traddr": "10.0.0.2", 00:17:11.328 "trsvcid": "4420" 00:17:11.328 }, 00:17:11.328 "peer_address": { 00:17:11.328 "trtype": "TCP", 00:17:11.328 "adrfam": "IPv4", 00:17:11.328 "traddr": "10.0.0.1", 00:17:11.328 "trsvcid": "34486" 00:17:11.328 }, 00:17:11.328 "auth": { 00:17:11.328 "state": "completed", 00:17:11.328 "digest": "sha384", 00:17:11.328 "dhgroup": "ffdhe3072" 00:17:11.328 } 00:17:11.328 } 00:17:11.328 ]' 00:17:11.328 13:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:11.328 13:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:11.328 13:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:11.328 13:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:11.328 13:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:11.597 13:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.597 13:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.597 13:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.866 13:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MDlhZmU4ZTRiZTcwMzBiMGJiZTJhNWU5ZWQyMDJmNTE5NDI0ZmQ0M2NjZTkxNjNkMDJlYzg2MTcxY2I1OTA5NrD6o+E=: 00:17:12.799 13:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.799 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.799 13:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:12.799 13:56:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.799 13:56:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.799 13:56:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.799 13:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:12.799 13:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:12.799 13:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:12.799 13:56:07 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:12.799 13:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:17:12.799 13:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:12.799 13:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:12.799 13:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:12.799 13:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:12.799 13:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.799 13:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.799 13:56:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.799 13:56:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.056 13:56:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.056 13:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.056 13:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.313 00:17:13.313 13:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:13.313 13:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:13.313 13:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.571 13:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.571 13:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.571 13:56:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.571 13:56:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.571 13:56:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.571 13:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:13.571 { 00:17:13.571 "cntlid": 73, 00:17:13.571 "qid": 0, 00:17:13.571 "state": "enabled", 00:17:13.571 "thread": "nvmf_tgt_poll_group_000", 00:17:13.571 "listen_address": { 00:17:13.571 "trtype": "TCP", 00:17:13.571 "adrfam": "IPv4", 00:17:13.571 "traddr": "10.0.0.2", 00:17:13.571 "trsvcid": "4420" 00:17:13.571 }, 00:17:13.571 "peer_address": { 00:17:13.571 "trtype": "TCP", 00:17:13.571 "adrfam": "IPv4", 00:17:13.571 "traddr": "10.0.0.1", 00:17:13.571 "trsvcid": "34520" 00:17:13.571 }, 00:17:13.571 "auth": { 00:17:13.571 
"state": "completed", 00:17:13.571 "digest": "sha384", 00:17:13.571 "dhgroup": "ffdhe4096" 00:17:13.571 } 00:17:13.571 } 00:17:13.571 ]' 00:17:13.571 13:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:13.571 13:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:13.571 13:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:13.571 13:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:13.571 13:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:13.828 13:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.828 13:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.828 13:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.087 13:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:Nzc1ZjI1MDM1NTRmYjVjMDJlOTAwZTAxM2U1NzM2YTZkMzAxNDc5ZTRhMjRkZjcyhKo8vA==: --dhchap-ctrl-secret DHHC-1:03:MmJlYmRhNmM0YTFkOTFjOTg1ZmEwOTZmYTMxOWIzN2EzYWFjYWIwNTFmMTEwNWIxNzFiNmIyMmU3OTJiMGFiOfuiRg8=: 00:17:15.030 13:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.030 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.030 13:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:15.030 13:56:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.030 13:56:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.030 13:56:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.030 13:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:15.030 13:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:15.030 13:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:15.030 13:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:17:15.030 13:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:15.030 13:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:15.030 13:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:15.030 13:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:15.030 13:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.030 13:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.030 13:56:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.030 13:56:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.030 13:56:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.030 13:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.030 13:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.625 00:17:15.625 13:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:15.625 13:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.625 13:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:15.625 13:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.625 13:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.625 13:56:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.625 13:56:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.625 13:56:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.625 13:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:15.625 { 00:17:15.625 "cntlid": 75, 00:17:15.625 "qid": 0, 00:17:15.625 "state": "enabled", 00:17:15.625 "thread": "nvmf_tgt_poll_group_000", 00:17:15.625 "listen_address": { 00:17:15.625 "trtype": "TCP", 00:17:15.625 "adrfam": "IPv4", 00:17:15.625 "traddr": "10.0.0.2", 00:17:15.625 "trsvcid": "4420" 00:17:15.625 }, 00:17:15.625 "peer_address": { 00:17:15.625 "trtype": "TCP", 00:17:15.625 "adrfam": "IPv4", 00:17:15.625 "traddr": "10.0.0.1", 00:17:15.625 "trsvcid": "34538" 00:17:15.625 }, 00:17:15.625 "auth": { 00:17:15.625 "state": "completed", 00:17:15.625 "digest": "sha384", 00:17:15.625 "dhgroup": "ffdhe4096" 00:17:15.625 } 00:17:15.625 } 00:17:15.625 ]' 00:17:15.625 13:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:15.886 13:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:15.886 13:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:15.886 13:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:15.886 13:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:15.886 13:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.886 13:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.886 13:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.145 13:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:YTI1MTEzZjM0NjFkNTkwYjUwNjc5ODlmOTNhZjc3NTFLAk7g: --dhchap-ctrl-secret DHHC-1:02:NjMxYTExZjU0NmQ1NTlhYjcwZDhkODc5NTk3ZjI1OTliZjJlZWFmNTdhNjdlYmRlkGfA3A==: 00:17:17.079 13:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.079 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.079 13:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:17.079 13:56:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.079 13:56:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.079 13:56:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.079 13:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:17.079 13:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:17.080 13:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:17.337 13:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:17:17.337 13:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:17.337 13:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:17.337 13:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:17.337 13:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:17.337 13:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.337 13:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.337 13:56:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.337 13:56:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.337 13:56:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.337 13:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.337 13:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:17:17.595 00:17:17.595 13:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:17.595 13:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:17.595 13:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.853 13:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.853 13:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.853 13:56:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.853 13:56:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.853 13:56:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.853 13:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:17.853 { 00:17:17.853 "cntlid": 77, 00:17:17.853 "qid": 0, 00:17:17.853 "state": "enabled", 00:17:17.853 "thread": "nvmf_tgt_poll_group_000", 00:17:17.853 "listen_address": { 00:17:17.853 "trtype": "TCP", 00:17:17.853 "adrfam": "IPv4", 00:17:17.853 "traddr": "10.0.0.2", 00:17:17.853 "trsvcid": "4420" 00:17:17.853 }, 00:17:17.853 "peer_address": { 00:17:17.853 "trtype": "TCP", 00:17:17.853 "adrfam": "IPv4", 00:17:17.853 "traddr": "10.0.0.1", 00:17:17.853 "trsvcid": "56594" 00:17:17.853 }, 00:17:17.853 "auth": { 00:17:17.853 "state": "completed", 00:17:17.853 "digest": "sha384", 00:17:17.853 "dhgroup": "ffdhe4096" 00:17:17.853 } 00:17:17.853 } 00:17:17.853 ]' 00:17:17.853 13:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:17.853 13:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:17.853 13:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:17.853 13:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:17.853 13:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:18.111 13:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.111 13:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.111 13:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.369 13:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:OWRhMmY3NmQxYmFmZTdlZDRlOTkzODdjN2JhNDI4ZGVhMmFlN2I2MTcyNThjNDMw/g+/BQ==: --dhchap-ctrl-secret DHHC-1:01:MTI0NWE4YzEzY2UzYmU1NWVhNmFjNWUwYTU1NWI3M2TBB42v: 00:17:19.307 13:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.307 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.307 13:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:19.307 13:56:13 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.307 13:56:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.307 13:56:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.307 13:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:19.307 13:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:19.307 13:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:19.307 13:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:17:19.307 13:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:19.307 13:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:19.307 13:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:19.307 13:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:19.307 13:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.307 13:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:17:19.307 13:56:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.307 13:56:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.307 13:56:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.307 13:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:19.307 13:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:19.874 00:17:19.874 13:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:19.874 13:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:19.874 13:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.874 13:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.874 13:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.874 13:56:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.874 13:56:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.132 13:56:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.132 13:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:20.132 { 00:17:20.133 "cntlid": 79, 00:17:20.133 "qid": 
0, 00:17:20.133 "state": "enabled", 00:17:20.133 "thread": "nvmf_tgt_poll_group_000", 00:17:20.133 "listen_address": { 00:17:20.133 "trtype": "TCP", 00:17:20.133 "adrfam": "IPv4", 00:17:20.133 "traddr": "10.0.0.2", 00:17:20.133 "trsvcid": "4420" 00:17:20.133 }, 00:17:20.133 "peer_address": { 00:17:20.133 "trtype": "TCP", 00:17:20.133 "adrfam": "IPv4", 00:17:20.133 "traddr": "10.0.0.1", 00:17:20.133 "trsvcid": "56612" 00:17:20.133 }, 00:17:20.133 "auth": { 00:17:20.133 "state": "completed", 00:17:20.133 "digest": "sha384", 00:17:20.133 "dhgroup": "ffdhe4096" 00:17:20.133 } 00:17:20.133 } 00:17:20.133 ]' 00:17:20.133 13:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:20.133 13:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:20.133 13:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:20.133 13:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:20.133 13:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:20.133 13:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.133 13:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.133 13:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.390 13:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MDlhZmU4ZTRiZTcwMzBiMGJiZTJhNWU5ZWQyMDJmNTE5NDI0ZmQ0M2NjZTkxNjNkMDJlYzg2MTcxY2I1OTA5NrD6o+E=: 00:17:21.329 13:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.329 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.329 13:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:21.329 13:56:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.329 13:56:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.329 13:56:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.329 13:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:21.329 13:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:21.329 13:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:21.329 13:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:21.587 13:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:17:21.587 13:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:21.587 13:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:21.587 13:56:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:21.587 13:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:21.587 13:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.587 13:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.587 13:56:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.587 13:56:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.587 13:56:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.587 13:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.587 13:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.154 00:17:22.154 13:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:22.154 13:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:22.154 13:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.411 13:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.411 13:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.411 13:56:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.411 13:56:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.411 13:56:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.411 13:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:22.411 { 00:17:22.411 "cntlid": 81, 00:17:22.411 "qid": 0, 00:17:22.411 "state": "enabled", 00:17:22.411 "thread": "nvmf_tgt_poll_group_000", 00:17:22.411 "listen_address": { 00:17:22.411 "trtype": "TCP", 00:17:22.411 "adrfam": "IPv4", 00:17:22.411 "traddr": "10.0.0.2", 00:17:22.411 "trsvcid": "4420" 00:17:22.411 }, 00:17:22.411 "peer_address": { 00:17:22.411 "trtype": "TCP", 00:17:22.411 "adrfam": "IPv4", 00:17:22.411 "traddr": "10.0.0.1", 00:17:22.411 "trsvcid": "56638" 00:17:22.411 }, 00:17:22.411 "auth": { 00:17:22.411 "state": "completed", 00:17:22.411 "digest": "sha384", 00:17:22.411 "dhgroup": "ffdhe6144" 00:17:22.411 } 00:17:22.411 } 00:17:22.411 ]' 00:17:22.411 13:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:22.411 13:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:22.411 13:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:22.411 13:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:22.411 13:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:22.411 13:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.411 13:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.411 13:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.977 13:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:Nzc1ZjI1MDM1NTRmYjVjMDJlOTAwZTAxM2U1NzM2YTZkMzAxNDc5ZTRhMjRkZjcyhKo8vA==: --dhchap-ctrl-secret DHHC-1:03:MmJlYmRhNmM0YTFkOTFjOTg1ZmEwOTZmYTMxOWIzN2EzYWFjYWIwNTFmMTEwNWIxNzFiNmIyMmU3OTJiMGFiOfuiRg8=: 00:17:23.545 13:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.545 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.545 13:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:23.545 13:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.545 13:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.545 13:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.545 13:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:23.545 13:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:23.545 13:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:24.131 13:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:17:24.131 13:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:24.131 13:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:24.131 13:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:24.131 13:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:24.131 13:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.131 13:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.131 13:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.131 13:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.131 13:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.131 13:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.131 13:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.388 00:17:24.388 13:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:24.388 13:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:24.388 13:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.644 13:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.644 13:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.644 13:56:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.644 13:56:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.644 13:56:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.644 13:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:24.644 { 00:17:24.644 "cntlid": 83, 00:17:24.644 "qid": 0, 00:17:24.644 "state": "enabled", 00:17:24.644 "thread": "nvmf_tgt_poll_group_000", 00:17:24.644 "listen_address": { 00:17:24.644 "trtype": "TCP", 00:17:24.644 "adrfam": "IPv4", 00:17:24.644 "traddr": "10.0.0.2", 00:17:24.644 "trsvcid": "4420" 00:17:24.644 }, 00:17:24.644 "peer_address": { 00:17:24.644 "trtype": "TCP", 00:17:24.645 "adrfam": "IPv4", 00:17:24.645 "traddr": "10.0.0.1", 00:17:24.645 "trsvcid": "56672" 00:17:24.645 }, 00:17:24.645 "auth": { 00:17:24.645 "state": "completed", 00:17:24.645 "digest": "sha384", 00:17:24.645 "dhgroup": "ffdhe6144" 00:17:24.645 } 00:17:24.645 } 00:17:24.645 ]' 00:17:24.645 13:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:24.902 13:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:24.902 13:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:24.902 13:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:24.902 13:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:24.902 13:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.902 13:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.902 13:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.160 13:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:YTI1MTEzZjM0NjFkNTkwYjUwNjc5ODlmOTNhZjc3NTFLAk7g: --dhchap-ctrl-secret 
DHHC-1:02:NjMxYTExZjU0NmQ1NTlhYjcwZDhkODc5NTk3ZjI1OTliZjJlZWFmNTdhNjdlYmRlkGfA3A==: 00:17:26.092 13:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.092 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.092 13:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:26.092 13:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.092 13:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.092 13:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.092 13:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:26.092 13:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:26.092 13:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:26.349 13:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:17:26.349 13:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:26.349 13:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:26.349 13:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:26.349 13:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:26.349 13:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.349 13:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.349 13:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.349 13:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.349 13:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.349 13:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.349 13:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.916 00:17:26.916 13:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:26.916 13:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.916 13:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:27.172 13:56:21 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.172 13:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.172 13:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.172 13:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.172 13:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.172 13:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:27.172 { 00:17:27.172 "cntlid": 85, 00:17:27.172 "qid": 0, 00:17:27.172 "state": "enabled", 00:17:27.172 "thread": "nvmf_tgt_poll_group_000", 00:17:27.172 "listen_address": { 00:17:27.172 "trtype": "TCP", 00:17:27.172 "adrfam": "IPv4", 00:17:27.172 "traddr": "10.0.0.2", 00:17:27.172 "trsvcid": "4420" 00:17:27.172 }, 00:17:27.172 "peer_address": { 00:17:27.173 "trtype": "TCP", 00:17:27.173 "adrfam": "IPv4", 00:17:27.173 "traddr": "10.0.0.1", 00:17:27.173 "trsvcid": "41142" 00:17:27.173 }, 00:17:27.173 "auth": { 00:17:27.173 "state": "completed", 00:17:27.173 "digest": "sha384", 00:17:27.173 "dhgroup": "ffdhe6144" 00:17:27.173 } 00:17:27.173 } 00:17:27.173 ]' 00:17:27.173 13:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:27.173 13:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:27.173 13:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:27.173 13:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:27.173 13:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:27.173 13:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.173 13:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.173 13:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.430 13:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:OWRhMmY3NmQxYmFmZTdlZDRlOTkzODdjN2JhNDI4ZGVhMmFlN2I2MTcyNThjNDMw/g+/BQ==: --dhchap-ctrl-secret DHHC-1:01:MTI0NWE4YzEzY2UzYmU1NWVhNmFjNWUwYTU1NWI3M2TBB42v: 00:17:28.362 13:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.362 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.362 13:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:28.362 13:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.362 13:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.362 13:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.362 13:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:28.362 13:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
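The pass that follows repeats, for sha384/ffdhe6144, the same per-key cycle already traced for ffdhe3072 and ffdhe4096. Condensed into a minimal bash sketch (socket path, NQNs and host UUID taken from this run; the loop and variable names are illustrative and not part of target/auth.sh; keys 0-2 additionally pass --dhchap-ctrlr-key ckeyN where the trace does), the cycle is:

# Sketch of one digest/dhgroup pass as exercised in this run. Assumes the SPDK target
# and the host-side app behind /var/tmp/host.sock are already running and that keys
# key0..key3 (and ckey0..ckey2) were registered earlier in the test.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02

for keyid in 0 1 2 3; do
    # Pin the host-side initiator to a single digest/dhgroup combination.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
    # Target-side RPC (rpc_cmd in the trace, default RPC socket): allow the host on the
    # subsystem with the key under test, then attach and authenticate from the host side.
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid"
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key "key$keyid"
    # Verification of the negotiated auth parameters happens here (see the jq checks
    # in the trace and the sketch further below), then the controller is detached.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    # The kernel initiator then repeats the handshake via nvme connect/disconnect with
    # the literal DHHC-1 secrets shown in the trace, and the host entry is removed
    # before the next keyid.
    "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
done
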
00:17:28.362 13:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:28.620 13:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:17:28.620 13:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:28.620 13:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:28.620 13:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:28.620 13:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:28.620 13:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.620 13:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:17:28.620 13:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.620 13:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.620 13:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.620 13:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:28.620 13:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:29.186 00:17:29.186 13:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:29.186 13:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:29.186 13:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.456 13:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.456 13:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.456 13:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.456 13:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.456 13:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.456 13:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:29.456 { 00:17:29.456 "cntlid": 87, 00:17:29.456 "qid": 0, 00:17:29.456 "state": "enabled", 00:17:29.456 "thread": "nvmf_tgt_poll_group_000", 00:17:29.456 "listen_address": { 00:17:29.456 "trtype": "TCP", 00:17:29.456 "adrfam": "IPv4", 00:17:29.456 "traddr": "10.0.0.2", 00:17:29.456 "trsvcid": "4420" 00:17:29.456 }, 00:17:29.456 "peer_address": { 00:17:29.456 "trtype": "TCP", 00:17:29.456 "adrfam": "IPv4", 00:17:29.456 "traddr": "10.0.0.1", 00:17:29.456 "trsvcid": "41168" 00:17:29.456 }, 00:17:29.456 "auth": { 00:17:29.456 "state": "completed", 
00:17:29.456 "digest": "sha384", 00:17:29.457 "dhgroup": "ffdhe6144" 00:17:29.457 } 00:17:29.457 } 00:17:29.457 ]' 00:17:29.457 13:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:29.457 13:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:29.457 13:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:29.457 13:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:29.457 13:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:29.457 13:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.457 13:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.457 13:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.023 13:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MDlhZmU4ZTRiZTcwMzBiMGJiZTJhNWU5ZWQyMDJmNTE5NDI0ZmQ0M2NjZTkxNjNkMDJlYzg2MTcxY2I1OTA5NrD6o+E=: 00:17:30.956 13:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.956 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.956 13:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:30.956 13:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.956 13:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.956 13:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.956 13:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:30.956 13:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:30.956 13:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:30.956 13:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:30.956 13:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:17:30.956 13:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:30.956 13:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:30.956 13:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:30.956 13:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:30.956 13:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.956 13:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:17:30.956 13:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.956 13:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.956 13:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.956 13:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.956 13:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.892 00:17:31.892 13:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:31.892 13:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:31.892 13:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.150 13:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.150 13:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.150 13:56:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.150 13:56:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.150 13:56:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.150 13:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:32.150 { 00:17:32.150 "cntlid": 89, 00:17:32.150 "qid": 0, 00:17:32.150 "state": "enabled", 00:17:32.150 "thread": "nvmf_tgt_poll_group_000", 00:17:32.150 "listen_address": { 00:17:32.150 "trtype": "TCP", 00:17:32.150 "adrfam": "IPv4", 00:17:32.150 "traddr": "10.0.0.2", 00:17:32.150 "trsvcid": "4420" 00:17:32.150 }, 00:17:32.150 "peer_address": { 00:17:32.150 "trtype": "TCP", 00:17:32.150 "adrfam": "IPv4", 00:17:32.150 "traddr": "10.0.0.1", 00:17:32.150 "trsvcid": "41196" 00:17:32.150 }, 00:17:32.150 "auth": { 00:17:32.150 "state": "completed", 00:17:32.150 "digest": "sha384", 00:17:32.150 "dhgroup": "ffdhe8192" 00:17:32.150 } 00:17:32.150 } 00:17:32.150 ]' 00:17:32.150 13:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:32.150 13:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:32.150 13:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:32.150 13:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:32.150 13:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:32.150 13:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.150 13:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.150 13:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.408 13:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:Nzc1ZjI1MDM1NTRmYjVjMDJlOTAwZTAxM2U1NzM2YTZkMzAxNDc5ZTRhMjRkZjcyhKo8vA==: --dhchap-ctrl-secret DHHC-1:03:MmJlYmRhNmM0YTFkOTFjOTg1ZmEwOTZmYTMxOWIzN2EzYWFjYWIwNTFmMTEwNWIxNzFiNmIyMmU3OTJiMGFiOfuiRg8=: 00:17:33.344 13:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.344 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.344 13:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:33.344 13:56:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.344 13:56:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.344 13:56:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.344 13:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:33.344 13:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:33.344 13:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:33.602 13:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:17:33.602 13:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:33.602 13:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:33.602 13:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:33.602 13:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:33.602 13:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.602 13:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.602 13:56:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.602 13:56:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.602 13:56:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.602 13:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.602 13:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
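Between attaching and detaching, every iteration confirms that the connection actually negotiated what was requested: it reads back the controller name on the host side, dumps the subsystem's qpairs on the target side, and checks the reported auth fields. Restated as a small bash sketch using the same RPCs and jq filters that appear in the trace (the expected values below are the sha384/ffdhe8192 pair this iteration configured):

# Check the negotiated DH-HMAC-CHAP parameters on the live connection (sketch).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Host side: the attached controller should be the one created above.
[[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# Target side: the qpair (qid 0) should report the configured digest and dhgroup
# and a completed authentication state.
qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
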
00:17:34.534 00:17:34.534 13:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:34.534 13:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.534 13:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:34.791 13:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.791 13:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.791 13:56:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.791 13:56:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.791 13:56:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.791 13:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:34.791 { 00:17:34.791 "cntlid": 91, 00:17:34.791 "qid": 0, 00:17:34.791 "state": "enabled", 00:17:34.791 "thread": "nvmf_tgt_poll_group_000", 00:17:34.791 "listen_address": { 00:17:34.791 "trtype": "TCP", 00:17:34.791 "adrfam": "IPv4", 00:17:34.791 "traddr": "10.0.0.2", 00:17:34.791 "trsvcid": "4420" 00:17:34.791 }, 00:17:34.791 "peer_address": { 00:17:34.791 "trtype": "TCP", 00:17:34.791 "adrfam": "IPv4", 00:17:34.791 "traddr": "10.0.0.1", 00:17:34.791 "trsvcid": "41228" 00:17:34.791 }, 00:17:34.791 "auth": { 00:17:34.791 "state": "completed", 00:17:34.791 "digest": "sha384", 00:17:34.791 "dhgroup": "ffdhe8192" 00:17:34.791 } 00:17:34.791 } 00:17:34.791 ]' 00:17:34.791 13:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:34.791 13:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:34.791 13:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:34.791 13:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:34.791 13:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:34.791 13:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.791 13:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.791 13:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.049 13:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:YTI1MTEzZjM0NjFkNTkwYjUwNjc5ODlmOTNhZjc3NTFLAk7g: --dhchap-ctrl-secret DHHC-1:02:NjMxYTExZjU0NmQ1NTlhYjcwZDhkODc5NTk3ZjI1OTliZjJlZWFmNTdhNjdlYmRlkGfA3A==: 00:17:35.982 13:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.982 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.982 13:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:35.982 13:56:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:35.982 13:56:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.982 13:56:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.982 13:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:35.982 13:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:35.982 13:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:36.239 13:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:17:36.239 13:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:36.239 13:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:36.239 13:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:36.239 13:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:36.239 13:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.239 13:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.239 13:56:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.239 13:56:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.239 13:56:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.239 13:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.239 13:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.172 00:17:37.172 13:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:37.172 13:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:37.172 13:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.430 13:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.430 13:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.430 13:56:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.430 13:56:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.430 13:56:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.430 13:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:37.430 { 
00:17:37.430 "cntlid": 93, 00:17:37.430 "qid": 0, 00:17:37.430 "state": "enabled", 00:17:37.430 "thread": "nvmf_tgt_poll_group_000", 00:17:37.430 "listen_address": { 00:17:37.430 "trtype": "TCP", 00:17:37.430 "adrfam": "IPv4", 00:17:37.430 "traddr": "10.0.0.2", 00:17:37.430 "trsvcid": "4420" 00:17:37.430 }, 00:17:37.430 "peer_address": { 00:17:37.430 "trtype": "TCP", 00:17:37.430 "adrfam": "IPv4", 00:17:37.430 "traddr": "10.0.0.1", 00:17:37.430 "trsvcid": "39176" 00:17:37.430 }, 00:17:37.430 "auth": { 00:17:37.430 "state": "completed", 00:17:37.430 "digest": "sha384", 00:17:37.430 "dhgroup": "ffdhe8192" 00:17:37.430 } 00:17:37.430 } 00:17:37.430 ]' 00:17:37.430 13:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:37.430 13:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:37.430 13:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:37.430 13:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:37.430 13:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:37.430 13:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.430 13:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.430 13:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.687 13:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:OWRhMmY3NmQxYmFmZTdlZDRlOTkzODdjN2JhNDI4ZGVhMmFlN2I2MTcyNThjNDMw/g+/BQ==: --dhchap-ctrl-secret DHHC-1:01:MTI0NWE4YzEzY2UzYmU1NWVhNmFjNWUwYTU1NWI3M2TBB42v: 00:17:38.622 13:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.622 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.622 13:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:38.622 13:56:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.622 13:56:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.622 13:56:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.622 13:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:38.622 13:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:38.622 13:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:38.879 13:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:17:38.879 13:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:38.879 13:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:38.879 13:56:33 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:38.879 13:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:38.879 13:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.879 13:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:17:38.879 13:56:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.879 13:56:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.879 13:56:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.879 13:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:38.879 13:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:39.817 00:17:39.817 13:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:39.817 13:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:39.817 13:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.817 13:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.817 13:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.817 13:56:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.817 13:56:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.075 13:56:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.075 13:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:40.075 { 00:17:40.075 "cntlid": 95, 00:17:40.075 "qid": 0, 00:17:40.075 "state": "enabled", 00:17:40.075 "thread": "nvmf_tgt_poll_group_000", 00:17:40.075 "listen_address": { 00:17:40.075 "trtype": "TCP", 00:17:40.075 "adrfam": "IPv4", 00:17:40.075 "traddr": "10.0.0.2", 00:17:40.075 "trsvcid": "4420" 00:17:40.075 }, 00:17:40.075 "peer_address": { 00:17:40.075 "trtype": "TCP", 00:17:40.075 "adrfam": "IPv4", 00:17:40.075 "traddr": "10.0.0.1", 00:17:40.075 "trsvcid": "39210" 00:17:40.075 }, 00:17:40.075 "auth": { 00:17:40.075 "state": "completed", 00:17:40.075 "digest": "sha384", 00:17:40.075 "dhgroup": "ffdhe8192" 00:17:40.075 } 00:17:40.075 } 00:17:40.075 ]' 00:17:40.075 13:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:40.075 13:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:40.075 13:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:40.075 13:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:40.075 13:56:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:40.075 13:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.075 13:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.075 13:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.333 13:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MDlhZmU4ZTRiZTcwMzBiMGJiZTJhNWU5ZWQyMDJmNTE5NDI0ZmQ0M2NjZTkxNjNkMDJlYzg2MTcxY2I1OTA5NrD6o+E=: 00:17:41.270 13:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.270 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.270 13:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:41.270 13:56:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.270 13:56:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.270 13:56:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.270 13:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:41.270 13:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:41.270 13:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:41.270 13:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:41.270 13:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:41.530 13:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:17:41.530 13:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:41.530 13:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:41.530 13:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:41.530 13:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:41.530 13:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.530 13:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.530 13:56:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.530 13:56:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.530 13:56:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.530 13:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.530 13:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.788 00:17:41.788 13:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:41.788 13:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:41.788 13:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.046 13:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.046 13:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.046 13:56:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.046 13:56:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.046 13:56:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.046 13:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:42.046 { 00:17:42.046 "cntlid": 97, 00:17:42.046 "qid": 0, 00:17:42.046 "state": "enabled", 00:17:42.046 "thread": "nvmf_tgt_poll_group_000", 00:17:42.046 "listen_address": { 00:17:42.046 "trtype": "TCP", 00:17:42.046 "adrfam": "IPv4", 00:17:42.046 "traddr": "10.0.0.2", 00:17:42.046 "trsvcid": "4420" 00:17:42.046 }, 00:17:42.046 "peer_address": { 00:17:42.046 "trtype": "TCP", 00:17:42.046 "adrfam": "IPv4", 00:17:42.046 "traddr": "10.0.0.1", 00:17:42.046 "trsvcid": "39222" 00:17:42.046 }, 00:17:42.046 "auth": { 00:17:42.046 "state": "completed", 00:17:42.046 "digest": "sha512", 00:17:42.046 "dhgroup": "null" 00:17:42.046 } 00:17:42.046 } 00:17:42.046 ]' 00:17:42.046 13:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:42.046 13:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:42.046 13:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:42.046 13:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:42.046 13:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:42.046 13:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.046 13:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.046 13:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.303 13:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:Nzc1ZjI1MDM1NTRmYjVjMDJlOTAwZTAxM2U1NzM2YTZkMzAxNDc5ZTRhMjRkZjcyhKo8vA==: --dhchap-ctrl-secret 
DHHC-1:03:MmJlYmRhNmM0YTFkOTFjOTg1ZmEwOTZmYTMxOWIzN2EzYWFjYWIwNTFmMTEwNWIxNzFiNmIyMmU3OTJiMGFiOfuiRg8=: 00:17:43.240 13:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.240 13:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:43.240 13:56:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.240 13:56:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.240 13:56:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.240 13:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:43.240 13:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:43.240 13:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:43.497 13:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:17:43.497 13:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:43.497 13:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:43.497 13:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:43.497 13:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:43.497 13:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.497 13:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.497 13:56:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.497 13:56:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.497 13:56:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.497 13:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.497 13:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.754 00:17:43.754 13:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:43.754 13:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:43.754 13:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.012 13:56:38 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.012 13:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.012 13:56:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.012 13:56:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.012 13:56:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.012 13:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:44.012 { 00:17:44.012 "cntlid": 99, 00:17:44.012 "qid": 0, 00:17:44.012 "state": "enabled", 00:17:44.012 "thread": "nvmf_tgt_poll_group_000", 00:17:44.012 "listen_address": { 00:17:44.012 "trtype": "TCP", 00:17:44.012 "adrfam": "IPv4", 00:17:44.012 "traddr": "10.0.0.2", 00:17:44.012 "trsvcid": "4420" 00:17:44.012 }, 00:17:44.012 "peer_address": { 00:17:44.012 "trtype": "TCP", 00:17:44.012 "adrfam": "IPv4", 00:17:44.012 "traddr": "10.0.0.1", 00:17:44.012 "trsvcid": "39266" 00:17:44.012 }, 00:17:44.012 "auth": { 00:17:44.012 "state": "completed", 00:17:44.012 "digest": "sha512", 00:17:44.012 "dhgroup": "null" 00:17:44.012 } 00:17:44.012 } 00:17:44.012 ]' 00:17:44.012 13:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:44.270 13:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:44.270 13:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:44.270 13:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:44.270 13:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:44.270 13:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.270 13:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.270 13:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.533 13:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:YTI1MTEzZjM0NjFkNTkwYjUwNjc5ODlmOTNhZjc3NTFLAk7g: --dhchap-ctrl-secret DHHC-1:02:NjMxYTExZjU0NmQ1NTlhYjcwZDhkODc5NTk3ZjI1OTliZjJlZWFmNTdhNjdlYmRlkGfA3A==: 00:17:45.468 13:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.468 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.468 13:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:45.468 13:56:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.468 13:56:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.468 13:56:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.468 13:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:45.468 13:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:45.468 13:56:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:45.725 13:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:17:45.725 13:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:45.725 13:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:45.725 13:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:45.725 13:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:45.725 13:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.725 13:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.725 13:56:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.725 13:56:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.725 13:56:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.725 13:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.725 13:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.984 00:17:46.241 13:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:46.241 13:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:46.241 13:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.241 13:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.241 13:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.241 13:56:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.241 13:56:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.498 13:56:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.498 13:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:46.498 { 00:17:46.498 "cntlid": 101, 00:17:46.498 "qid": 0, 00:17:46.498 "state": "enabled", 00:17:46.498 "thread": "nvmf_tgt_poll_group_000", 00:17:46.498 "listen_address": { 00:17:46.498 "trtype": "TCP", 00:17:46.498 "adrfam": "IPv4", 00:17:46.498 "traddr": "10.0.0.2", 00:17:46.498 "trsvcid": "4420" 00:17:46.498 }, 00:17:46.498 "peer_address": { 00:17:46.498 "trtype": "TCP", 00:17:46.498 "adrfam": "IPv4", 00:17:46.498 "traddr": "10.0.0.1", 00:17:46.498 "trsvcid": "54538" 00:17:46.498 }, 00:17:46.498 "auth": 
{ 00:17:46.498 "state": "completed", 00:17:46.498 "digest": "sha512", 00:17:46.498 "dhgroup": "null" 00:17:46.498 } 00:17:46.498 } 00:17:46.498 ]' 00:17:46.498 13:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:46.498 13:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:46.498 13:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:46.498 13:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:46.498 13:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:46.498 13:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.498 13:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.498 13:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.755 13:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:OWRhMmY3NmQxYmFmZTdlZDRlOTkzODdjN2JhNDI4ZGVhMmFlN2I2MTcyNThjNDMw/g+/BQ==: --dhchap-ctrl-secret DHHC-1:01:MTI0NWE4YzEzY2UzYmU1NWVhNmFjNWUwYTU1NWI3M2TBB42v: 00:17:47.690 13:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.690 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.690 13:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:47.690 13:56:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.690 13:56:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.690 13:56:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.690 13:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:47.690 13:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:47.690 13:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:47.949 13:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:17:47.949 13:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:47.949 13:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:47.949 13:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:47.949 13:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:47.949 13:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.949 13:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:17:47.949 13:56:42 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.949 13:56:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.949 13:56:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.949 13:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:47.949 13:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:48.237 00:17:48.237 13:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:48.237 13:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:48.237 13:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.518 13:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.518 13:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.518 13:56:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.518 13:56:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.518 13:56:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.518 13:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:48.518 { 00:17:48.518 "cntlid": 103, 00:17:48.518 "qid": 0, 00:17:48.518 "state": "enabled", 00:17:48.518 "thread": "nvmf_tgt_poll_group_000", 00:17:48.518 "listen_address": { 00:17:48.518 "trtype": "TCP", 00:17:48.518 "adrfam": "IPv4", 00:17:48.518 "traddr": "10.0.0.2", 00:17:48.518 "trsvcid": "4420" 00:17:48.518 }, 00:17:48.518 "peer_address": { 00:17:48.518 "trtype": "TCP", 00:17:48.518 "adrfam": "IPv4", 00:17:48.518 "traddr": "10.0.0.1", 00:17:48.518 "trsvcid": "54576" 00:17:48.518 }, 00:17:48.518 "auth": { 00:17:48.518 "state": "completed", 00:17:48.518 "digest": "sha512", 00:17:48.518 "dhgroup": "null" 00:17:48.518 } 00:17:48.518 } 00:17:48.518 ]' 00:17:48.519 13:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:48.519 13:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:48.519 13:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:48.519 13:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:48.519 13:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:48.519 13:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.519 13:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.519 13:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.777 13:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MDlhZmU4ZTRiZTcwMzBiMGJiZTJhNWU5ZWQyMDJmNTE5NDI0ZmQ0M2NjZTkxNjNkMDJlYzg2MTcxY2I1OTA5NrD6o+E=: 00:17:49.710 13:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.710 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.710 13:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:49.710 13:56:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.710 13:56:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.710 13:56:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.710 13:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:49.710 13:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:49.710 13:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:49.710 13:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:49.999 13:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:17:49.999 13:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:49.999 13:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:49.999 13:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:49.999 13:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:49.999 13:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.999 13:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.999 13:56:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.999 13:56:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.999 13:56:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.999 13:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.999 13:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.257 00:17:50.257 13:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:50.257 13:56:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.257 13:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:50.515 13:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.515 13:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.515 13:56:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.515 13:56:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.515 13:56:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.515 13:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:50.515 { 00:17:50.515 "cntlid": 105, 00:17:50.515 "qid": 0, 00:17:50.515 "state": "enabled", 00:17:50.515 "thread": "nvmf_tgt_poll_group_000", 00:17:50.515 "listen_address": { 00:17:50.515 "trtype": "TCP", 00:17:50.515 "adrfam": "IPv4", 00:17:50.515 "traddr": "10.0.0.2", 00:17:50.515 "trsvcid": "4420" 00:17:50.515 }, 00:17:50.515 "peer_address": { 00:17:50.515 "trtype": "TCP", 00:17:50.515 "adrfam": "IPv4", 00:17:50.515 "traddr": "10.0.0.1", 00:17:50.515 "trsvcid": "54592" 00:17:50.515 }, 00:17:50.515 "auth": { 00:17:50.515 "state": "completed", 00:17:50.515 "digest": "sha512", 00:17:50.515 "dhgroup": "ffdhe2048" 00:17:50.515 } 00:17:50.515 } 00:17:50.515 ]' 00:17:50.515 13:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:50.773 13:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:50.773 13:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:50.773 13:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:50.773 13:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:50.773 13:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.773 13:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.773 13:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.030 13:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:Nzc1ZjI1MDM1NTRmYjVjMDJlOTAwZTAxM2U1NzM2YTZkMzAxNDc5ZTRhMjRkZjcyhKo8vA==: --dhchap-ctrl-secret DHHC-1:03:MmJlYmRhNmM0YTFkOTFjOTg1ZmEwOTZmYTMxOWIzN2EzYWFjYWIwNTFmMTEwNWIxNzFiNmIyMmU3OTJiMGFiOfuiRg8=: 00:17:51.964 13:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.964 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.964 13:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:51.964 13:56:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.964 13:56:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
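The combinations stepped through here (sha384 with ffdhe8192 above, then sha512 with null, ffdhe2048 and further groups below) come from the nested loops that the xtrace markers target/auth.sh@91 through @96 expose. Reconstructed from those markers only, the driver looks roughly like the sketch below; the digests, dhgroups and keys arrays are filled in earlier in auth.sh and are not visible in this part of the trace.

  # Outer loops behind the repeated blocks (reconstructed from the @91-@96 markers).
  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          for keyid in "${!keys[@]}"; do
              # Point the host initiator at this digest/dhgroup combination ...
              hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
              # ... then run the connect/verify/teardown cycle sketched earlier.
              connect_authenticate "$digest" "$dhgroup" "$keyid"
          done
      done
  done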
00:17:51.964 13:56:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.964 13:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:51.964 13:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:51.964 13:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:51.964 13:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:17:51.964 13:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:51.964 13:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:51.964 13:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:51.964 13:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:51.964 13:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.964 13:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.964 13:56:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.964 13:56:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.222 13:56:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.222 13:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.222 13:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.480 00:17:52.480 13:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:52.480 13:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.480 13:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:52.738 13:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.738 13:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.738 13:56:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.738 13:56:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.738 13:56:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.738 13:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:52.738 { 00:17:52.738 "cntlid": 107, 00:17:52.738 "qid": 0, 00:17:52.738 "state": "enabled", 00:17:52.738 "thread": 
"nvmf_tgt_poll_group_000", 00:17:52.738 "listen_address": { 00:17:52.738 "trtype": "TCP", 00:17:52.738 "adrfam": "IPv4", 00:17:52.738 "traddr": "10.0.0.2", 00:17:52.738 "trsvcid": "4420" 00:17:52.738 }, 00:17:52.738 "peer_address": { 00:17:52.738 "trtype": "TCP", 00:17:52.738 "adrfam": "IPv4", 00:17:52.738 "traddr": "10.0.0.1", 00:17:52.738 "trsvcid": "54630" 00:17:52.738 }, 00:17:52.738 "auth": { 00:17:52.738 "state": "completed", 00:17:52.738 "digest": "sha512", 00:17:52.738 "dhgroup": "ffdhe2048" 00:17:52.738 } 00:17:52.738 } 00:17:52.738 ]' 00:17:52.738 13:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:52.738 13:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:52.738 13:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:52.738 13:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:52.738 13:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:52.738 13:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.738 13:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.738 13:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.997 13:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:YTI1MTEzZjM0NjFkNTkwYjUwNjc5ODlmOTNhZjc3NTFLAk7g: --dhchap-ctrl-secret DHHC-1:02:NjMxYTExZjU0NmQ1NTlhYjcwZDhkODc5NTk3ZjI1OTliZjJlZWFmNTdhNjdlYmRlkGfA3A==: 00:17:53.932 13:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.932 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.932 13:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:53.932 13:56:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.932 13:56:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.932 13:56:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.932 13:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:53.932 13:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:53.932 13:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:54.189 13:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:17:54.189 13:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:54.189 13:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:54.189 13:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:54.189 13:56:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:54.189 13:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.189 13:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.189 13:56:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.189 13:56:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.189 13:56:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.189 13:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.190 13:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.447 00:17:54.447 13:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:54.447 13:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:54.447 13:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.705 13:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.705 13:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.705 13:56:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.705 13:56:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.705 13:56:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.705 13:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:54.705 { 00:17:54.705 "cntlid": 109, 00:17:54.705 "qid": 0, 00:17:54.705 "state": "enabled", 00:17:54.705 "thread": "nvmf_tgt_poll_group_000", 00:17:54.705 "listen_address": { 00:17:54.705 "trtype": "TCP", 00:17:54.705 "adrfam": "IPv4", 00:17:54.705 "traddr": "10.0.0.2", 00:17:54.705 "trsvcid": "4420" 00:17:54.705 }, 00:17:54.705 "peer_address": { 00:17:54.705 "trtype": "TCP", 00:17:54.705 "adrfam": "IPv4", 00:17:54.705 "traddr": "10.0.0.1", 00:17:54.705 "trsvcid": "54662" 00:17:54.705 }, 00:17:54.705 "auth": { 00:17:54.705 "state": "completed", 00:17:54.705 "digest": "sha512", 00:17:54.705 "dhgroup": "ffdhe2048" 00:17:54.705 } 00:17:54.705 } 00:17:54.705 ]' 00:17:54.705 13:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:54.705 13:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:54.705 13:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:54.963 13:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:54.963 13:56:49 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:54.963 13:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.963 13:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.963 13:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.221 13:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:OWRhMmY3NmQxYmFmZTdlZDRlOTkzODdjN2JhNDI4ZGVhMmFlN2I2MTcyNThjNDMw/g+/BQ==: --dhchap-ctrl-secret DHHC-1:01:MTI0NWE4YzEzY2UzYmU1NWVhNmFjNWUwYTU1NWI3M2TBB42v: 00:17:56.155 13:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.155 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.155 13:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:56.155 13:56:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.155 13:56:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.155 13:56:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.155 13:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:56.155 13:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:56.155 13:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:56.413 13:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:17:56.413 13:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:56.413 13:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:56.413 13:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:56.413 13:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:56.413 13:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.413 13:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:17:56.413 13:56:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.413 13:56:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.413 13:56:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.413 13:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:56.413 13:56:51 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:56.671 00:17:56.671 13:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:56.671 13:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:56.671 13:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.929 13:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.929 13:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.929 13:56:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.929 13:56:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.929 13:56:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.929 13:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:56.929 { 00:17:56.929 "cntlid": 111, 00:17:56.929 "qid": 0, 00:17:56.929 "state": "enabled", 00:17:56.929 "thread": "nvmf_tgt_poll_group_000", 00:17:56.929 "listen_address": { 00:17:56.929 "trtype": "TCP", 00:17:56.929 "adrfam": "IPv4", 00:17:56.929 "traddr": "10.0.0.2", 00:17:56.929 "trsvcid": "4420" 00:17:56.929 }, 00:17:56.929 "peer_address": { 00:17:56.929 "trtype": "TCP", 00:17:56.929 "adrfam": "IPv4", 00:17:56.929 "traddr": "10.0.0.1", 00:17:56.929 "trsvcid": "39466" 00:17:56.929 }, 00:17:56.929 "auth": { 00:17:56.929 "state": "completed", 00:17:56.929 "digest": "sha512", 00:17:56.929 "dhgroup": "ffdhe2048" 00:17:56.929 } 00:17:56.929 } 00:17:56.929 ]' 00:17:56.929 13:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:56.929 13:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:56.929 13:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:57.188 13:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:57.188 13:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:57.188 13:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.188 13:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.188 13:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.445 13:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MDlhZmU4ZTRiZTcwMzBiMGJiZTJhNWU5ZWQyMDJmNTE5NDI0ZmQ0M2NjZTkxNjNkMDJlYzg2MTcxY2I1OTA5NrD6o+E=: 00:17:58.381 13:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.381 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.381 13:56:53 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:58.381 13:56:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.381 13:56:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.381 13:56:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.381 13:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:58.381 13:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:58.381 13:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:58.381 13:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:58.638 13:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:17:58.638 13:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:58.638 13:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:58.638 13:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:58.638 13:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:58.638 13:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.638 13:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.638 13:56:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.638 13:56:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.638 13:56:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.638 13:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.638 13:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.895 00:17:58.895 13:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:58.895 13:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:58.895 13:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.152 13:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.152 13:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.152 13:56:53 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.152 13:56:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.152 13:56:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.152 13:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:59.152 { 00:17:59.152 "cntlid": 113, 00:17:59.152 "qid": 0, 00:17:59.152 "state": "enabled", 00:17:59.152 "thread": "nvmf_tgt_poll_group_000", 00:17:59.152 "listen_address": { 00:17:59.152 "trtype": "TCP", 00:17:59.152 "adrfam": "IPv4", 00:17:59.152 "traddr": "10.0.0.2", 00:17:59.152 "trsvcid": "4420" 00:17:59.152 }, 00:17:59.152 "peer_address": { 00:17:59.152 "trtype": "TCP", 00:17:59.152 "adrfam": "IPv4", 00:17:59.152 "traddr": "10.0.0.1", 00:17:59.152 "trsvcid": "39510" 00:17:59.152 }, 00:17:59.152 "auth": { 00:17:59.152 "state": "completed", 00:17:59.152 "digest": "sha512", 00:17:59.152 "dhgroup": "ffdhe3072" 00:17:59.152 } 00:17:59.152 } 00:17:59.152 ]' 00:17:59.152 13:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:59.152 13:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:59.152 13:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:59.152 13:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:59.152 13:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:59.409 13:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.409 13:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.409 13:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.668 13:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:Nzc1ZjI1MDM1NTRmYjVjMDJlOTAwZTAxM2U1NzM2YTZkMzAxNDc5ZTRhMjRkZjcyhKo8vA==: --dhchap-ctrl-secret DHHC-1:03:MmJlYmRhNmM0YTFkOTFjOTg1ZmEwOTZmYTMxOWIzN2EzYWFjYWIwNTFmMTEwNWIxNzFiNmIyMmU3OTJiMGFiOfuiRg8=: 00:18:00.599 13:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.599 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.599 13:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:00.599 13:56:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.600 13:56:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.600 13:56:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.600 13:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:00.600 13:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:00.600 13:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:00.857 13:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:18:00.857 13:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:00.857 13:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:00.857 13:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:00.857 13:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:00.857 13:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.857 13:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.857 13:56:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.857 13:56:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.857 13:56:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.857 13:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.857 13:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.114 00:18:01.114 13:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:01.114 13:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:01.114 13:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.371 13:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.371 13:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.371 13:56:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.371 13:56:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.371 13:56:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.371 13:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:01.371 { 00:18:01.371 "cntlid": 115, 00:18:01.371 "qid": 0, 00:18:01.371 "state": "enabled", 00:18:01.371 "thread": "nvmf_tgt_poll_group_000", 00:18:01.371 "listen_address": { 00:18:01.371 "trtype": "TCP", 00:18:01.371 "adrfam": "IPv4", 00:18:01.371 "traddr": "10.0.0.2", 00:18:01.371 "trsvcid": "4420" 00:18:01.371 }, 00:18:01.371 "peer_address": { 00:18:01.371 "trtype": "TCP", 00:18:01.371 "adrfam": "IPv4", 00:18:01.371 "traddr": "10.0.0.1", 00:18:01.371 "trsvcid": "39540" 00:18:01.371 }, 00:18:01.371 "auth": { 00:18:01.371 "state": "completed", 00:18:01.371 "digest": "sha512", 00:18:01.371 "dhgroup": "ffdhe3072" 00:18:01.371 } 00:18:01.371 } 
00:18:01.371 ]' 00:18:01.371 13:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:01.371 13:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:01.371 13:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:01.371 13:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:01.371 13:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:01.371 13:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.371 13:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.371 13:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.629 13:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:YTI1MTEzZjM0NjFkNTkwYjUwNjc5ODlmOTNhZjc3NTFLAk7g: --dhchap-ctrl-secret DHHC-1:02:NjMxYTExZjU0NmQ1NTlhYjcwZDhkODc5NTk3ZjI1OTliZjJlZWFmNTdhNjdlYmRlkGfA3A==: 00:18:02.565 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.565 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.565 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:02.565 13:56:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.565 13:56:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.565 13:56:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.565 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:02.565 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:02.565 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:03.148 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:18:03.148 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:03.148 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:03.148 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:03.148 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:03.148 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.148 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.148 13:56:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.148 13:56:57 
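For key index 2 both a host key (key2) and a controller key (ckey2) are configured, so DH-HMAC-CHAP runs bidirectionally: the target verifies the host and the host verifies the controller's response. The key names refer to keys registered with the two applications earlier in the run (not shown in this excerpt). A minimal sketch of the two sides, using the same RPCs that appear in the trace:

  # target side: authorize the host NQN with key2 and present ckey2 back (bidirectional)
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # host side: attach with the matching pair so the controller can be authenticated too
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2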
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.148 13:56:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.148 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.148 13:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.407 00:18:03.407 13:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:03.407 13:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:03.407 13:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.666 13:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.666 13:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.666 13:56:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.666 13:56:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.666 13:56:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.666 13:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:03.666 { 00:18:03.666 "cntlid": 117, 00:18:03.666 "qid": 0, 00:18:03.666 "state": "enabled", 00:18:03.666 "thread": "nvmf_tgt_poll_group_000", 00:18:03.666 "listen_address": { 00:18:03.666 "trtype": "TCP", 00:18:03.666 "adrfam": "IPv4", 00:18:03.666 "traddr": "10.0.0.2", 00:18:03.666 "trsvcid": "4420" 00:18:03.666 }, 00:18:03.666 "peer_address": { 00:18:03.666 "trtype": "TCP", 00:18:03.666 "adrfam": "IPv4", 00:18:03.666 "traddr": "10.0.0.1", 00:18:03.666 "trsvcid": "39582" 00:18:03.666 }, 00:18:03.666 "auth": { 00:18:03.666 "state": "completed", 00:18:03.666 "digest": "sha512", 00:18:03.666 "dhgroup": "ffdhe3072" 00:18:03.666 } 00:18:03.666 } 00:18:03.666 ]' 00:18:03.666 13:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:03.666 13:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:03.666 13:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:03.666 13:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:03.666 13:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:03.666 13:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.666 13:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.666 13:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.925 13:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:OWRhMmY3NmQxYmFmZTdlZDRlOTkzODdjN2JhNDI4ZGVhMmFlN2I2MTcyNThjNDMw/g+/BQ==: --dhchap-ctrl-secret DHHC-1:01:MTI0NWE4YzEzY2UzYmU1NWVhNmFjNWUwYTU1NWI3M2TBB42v: 00:18:04.863 13:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.863 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.863 13:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:04.863 13:56:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.863 13:56:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.863 13:56:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.863 13:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:04.863 13:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:04.863 13:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:05.121 13:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:18:05.121 13:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:05.121 13:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:05.121 13:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:05.121 13:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:05.121 13:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.121 13:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:18:05.121 13:56:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.121 13:56:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.121 13:56:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.121 13:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:05.121 13:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:05.379 00:18:05.379 13:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:05.379 13:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:05.379 13:57:00 nvmf_tcp.nvmf_auth_target -- 
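Key index 3 has no companion controller key, so the ${ckeys[$3]:+...} expansion at target/auth.sh@37 drops --dhchap-ctrlr-key entirely and this round authenticates in one direction only (the target verifies the host). The bash idiom in isolation, with illustrative values:

  # :+ expands to the flag pair only when the array entry is non-empty
  ckeys=(ckey0 ckey1 ckey2 "")                               # index 3 deliberately empty
  keyid=3
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  # "${ckey[@]}" is now empty, so nvmf_subsystem_add_host receives only --dhchap-key key3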
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.637 13:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.637 13:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.637 13:57:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.637 13:57:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.637 13:57:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.637 13:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:05.637 { 00:18:05.637 "cntlid": 119, 00:18:05.637 "qid": 0, 00:18:05.637 "state": "enabled", 00:18:05.637 "thread": "nvmf_tgt_poll_group_000", 00:18:05.637 "listen_address": { 00:18:05.637 "trtype": "TCP", 00:18:05.637 "adrfam": "IPv4", 00:18:05.637 "traddr": "10.0.0.2", 00:18:05.637 "trsvcid": "4420" 00:18:05.637 }, 00:18:05.637 "peer_address": { 00:18:05.637 "trtype": "TCP", 00:18:05.637 "adrfam": "IPv4", 00:18:05.637 "traddr": "10.0.0.1", 00:18:05.637 "trsvcid": "39614" 00:18:05.637 }, 00:18:05.637 "auth": { 00:18:05.637 "state": "completed", 00:18:05.637 "digest": "sha512", 00:18:05.637 "dhgroup": "ffdhe3072" 00:18:05.637 } 00:18:05.637 } 00:18:05.637 ]' 00:18:05.637 13:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:05.896 13:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:05.896 13:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:05.896 13:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:05.896 13:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:05.896 13:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.896 13:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.896 13:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.153 13:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MDlhZmU4ZTRiZTcwMzBiMGJiZTJhNWU5ZWQyMDJmNTE5NDI0ZmQ0M2NjZTkxNjNkMDJlYzg2MTcxY2I1OTA5NrD6o+E=: 00:18:07.115 13:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.115 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.115 13:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:07.115 13:57:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.115 13:57:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.115 13:57:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.115 13:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:07.115 13:57:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:07.115 13:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:07.115 13:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:07.372 13:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:18:07.372 13:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:07.372 13:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:07.372 13:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:07.372 13:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:07.372 13:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.372 13:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.372 13:57:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.372 13:57:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.372 13:57:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.372 13:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.372 13:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.630 00:18:07.630 13:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:07.630 13:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:07.630 13:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.887 13:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.887 13:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.887 13:57:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.887 13:57:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.887 13:57:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.887 13:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:07.887 { 00:18:07.887 "cntlid": 121, 00:18:07.887 "qid": 0, 00:18:07.887 "state": "enabled", 00:18:07.887 "thread": "nvmf_tgt_poll_group_000", 00:18:07.887 "listen_address": { 00:18:07.887 "trtype": "TCP", 00:18:07.887 "adrfam": "IPv4", 
00:18:07.887 "traddr": "10.0.0.2", 00:18:07.887 "trsvcid": "4420" 00:18:07.887 }, 00:18:07.887 "peer_address": { 00:18:07.887 "trtype": "TCP", 00:18:07.887 "adrfam": "IPv4", 00:18:07.887 "traddr": "10.0.0.1", 00:18:07.887 "trsvcid": "48460" 00:18:07.887 }, 00:18:07.887 "auth": { 00:18:07.887 "state": "completed", 00:18:07.887 "digest": "sha512", 00:18:07.887 "dhgroup": "ffdhe4096" 00:18:07.887 } 00:18:07.887 } 00:18:07.887 ]' 00:18:07.887 13:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:07.887 13:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:07.887 13:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:07.887 13:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:07.887 13:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:08.145 13:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.145 13:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.145 13:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.402 13:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:Nzc1ZjI1MDM1NTRmYjVjMDJlOTAwZTAxM2U1NzM2YTZkMzAxNDc5ZTRhMjRkZjcyhKo8vA==: --dhchap-ctrl-secret DHHC-1:03:MmJlYmRhNmM0YTFkOTFjOTg1ZmEwOTZmYTMxOWIzN2EzYWFjYWIwNTFmMTEwNWIxNzFiNmIyMmU3OTJiMGFiOfuiRg8=: 00:18:09.334 13:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.334 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.334 13:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:09.334 13:57:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.334 13:57:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.334 13:57:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.334 13:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:09.334 13:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:09.334 13:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:09.334 13:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:18:09.334 13:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:09.334 13:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:09.334 13:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:09.334 13:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:09.334 13:57:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.334 13:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.334 13:57:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.334 13:57:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.334 13:57:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.334 13:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.334 13:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.899 00:18:09.899 13:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:09.899 13:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:09.899 13:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.156 13:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.156 13:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.156 13:57:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.156 13:57:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.156 13:57:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.156 13:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:10.156 { 00:18:10.156 "cntlid": 123, 00:18:10.156 "qid": 0, 00:18:10.156 "state": "enabled", 00:18:10.156 "thread": "nvmf_tgt_poll_group_000", 00:18:10.156 "listen_address": { 00:18:10.156 "trtype": "TCP", 00:18:10.156 "adrfam": "IPv4", 00:18:10.156 "traddr": "10.0.0.2", 00:18:10.156 "trsvcid": "4420" 00:18:10.156 }, 00:18:10.156 "peer_address": { 00:18:10.156 "trtype": "TCP", 00:18:10.156 "adrfam": "IPv4", 00:18:10.156 "traddr": "10.0.0.1", 00:18:10.156 "trsvcid": "48490" 00:18:10.156 }, 00:18:10.156 "auth": { 00:18:10.156 "state": "completed", 00:18:10.156 "digest": "sha512", 00:18:10.156 "dhgroup": "ffdhe4096" 00:18:10.156 } 00:18:10.156 } 00:18:10.156 ]' 00:18:10.156 13:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:10.156 13:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:10.156 13:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:10.156 13:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:10.156 13:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:10.156 13:57:04 
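Each successful attach is verified from the target side: nvmf_subsystem_get_qpairs is captured into the qpairs variable and three jq probes (auth.sh@46-48, interleaved above and below) assert that the negotiated digest, DH group and final state match what was requested. Condensed into a standalone check for this iteration's expected values (sha512 with ffdhe4096):

  qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]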
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.156 13:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.156 13:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.412 13:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:YTI1MTEzZjM0NjFkNTkwYjUwNjc5ODlmOTNhZjc3NTFLAk7g: --dhchap-ctrl-secret DHHC-1:02:NjMxYTExZjU0NmQ1NTlhYjcwZDhkODc5NTk3ZjI1OTliZjJlZWFmNTdhNjdlYmRlkGfA3A==: 00:18:11.343 13:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.343 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.343 13:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:11.343 13:57:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.343 13:57:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.343 13:57:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.343 13:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:11.343 13:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:11.343 13:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:11.600 13:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:18:11.600 13:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:11.600 13:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:11.600 13:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:11.600 13:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:11.600 13:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.600 13:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.600 13:57:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.600 13:57:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.600 13:57:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.600 13:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.600 13:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.857 00:18:11.858 13:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:11.858 13:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:11.858 13:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.115 13:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.115 13:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.115 13:57:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.115 13:57:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.115 13:57:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.115 13:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:12.115 { 00:18:12.115 "cntlid": 125, 00:18:12.115 "qid": 0, 00:18:12.115 "state": "enabled", 00:18:12.115 "thread": "nvmf_tgt_poll_group_000", 00:18:12.115 "listen_address": { 00:18:12.115 "trtype": "TCP", 00:18:12.115 "adrfam": "IPv4", 00:18:12.115 "traddr": "10.0.0.2", 00:18:12.115 "trsvcid": "4420" 00:18:12.115 }, 00:18:12.115 "peer_address": { 00:18:12.115 "trtype": "TCP", 00:18:12.115 "adrfam": "IPv4", 00:18:12.115 "traddr": "10.0.0.1", 00:18:12.115 "trsvcid": "48528" 00:18:12.115 }, 00:18:12.115 "auth": { 00:18:12.115 "state": "completed", 00:18:12.115 "digest": "sha512", 00:18:12.115 "dhgroup": "ffdhe4096" 00:18:12.115 } 00:18:12.115 } 00:18:12.115 ]' 00:18:12.115 13:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:12.373 13:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:12.373 13:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:12.373 13:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:12.373 13:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:12.373 13:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.373 13:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.373 13:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.630 13:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:OWRhMmY3NmQxYmFmZTdlZDRlOTkzODdjN2JhNDI4ZGVhMmFlN2I2MTcyNThjNDMw/g+/BQ==: --dhchap-ctrl-secret DHHC-1:01:MTI0NWE4YzEzY2UzYmU1NWVhNmFjNWUwYTU1NWI3M2TBB42v: 00:18:13.561 13:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.561 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
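After the SPDK host detaches, the same subsystem is exercised with the Linux kernel initiator: nvme connect carries the secrets inline as DHHC-1 strings (--dhchap-secret for the host key, --dhchap-ctrl-secret for the controller key), and the "disconnected 1 controller(s)" message above confirms the handshake completed and a controller was actually created. The DHHC-1:xx:...: form is the NVMe in-band authentication key representation: the two-digit field records which hash, if any, was applied when the key was generated, and the middle part is base64 of the key material plus a checksum. Recent nvme-cli can generate such strings; the invocation below is illustrative and flag spellings may vary by version:

  # generate a host key (hypothetical parameters), then connect with it as in the trace
  nvme gen-dhchap-key --key-length=48 --hmac=2
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
      --hostid cd6acfbe-4794-e311-a299-001e67a97b02 \
      --dhchap-secret 'DHHC-1:02:<base64>:' --dhchap-ctrl-secret 'DHHC-1:01:<base64>:'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0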
00:18:13.561 13:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:13.561 13:57:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.561 13:57:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.561 13:57:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.561 13:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:13.561 13:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:13.561 13:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:13.818 13:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:18:13.818 13:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:13.818 13:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:13.818 13:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:13.818 13:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:13.818 13:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.818 13:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:18:13.818 13:57:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.818 13:57:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.818 13:57:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.818 13:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:13.818 13:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:14.075 00:18:14.075 13:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:14.075 13:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.075 13:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:14.360 13:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.360 13:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.360 13:57:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.360 13:57:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:18:14.360 13:57:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.360 13:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:14.360 { 00:18:14.360 "cntlid": 127, 00:18:14.360 "qid": 0, 00:18:14.360 "state": "enabled", 00:18:14.360 "thread": "nvmf_tgt_poll_group_000", 00:18:14.360 "listen_address": { 00:18:14.360 "trtype": "TCP", 00:18:14.360 "adrfam": "IPv4", 00:18:14.360 "traddr": "10.0.0.2", 00:18:14.360 "trsvcid": "4420" 00:18:14.360 }, 00:18:14.360 "peer_address": { 00:18:14.360 "trtype": "TCP", 00:18:14.360 "adrfam": "IPv4", 00:18:14.360 "traddr": "10.0.0.1", 00:18:14.360 "trsvcid": "48566" 00:18:14.360 }, 00:18:14.360 "auth": { 00:18:14.360 "state": "completed", 00:18:14.360 "digest": "sha512", 00:18:14.360 "dhgroup": "ffdhe4096" 00:18:14.360 } 00:18:14.360 } 00:18:14.360 ]' 00:18:14.360 13:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:14.360 13:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:14.360 13:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:14.360 13:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:14.360 13:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:14.617 13:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.617 13:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.617 13:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.874 13:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MDlhZmU4ZTRiZTcwMzBiMGJiZTJhNWU5ZWQyMDJmNTE5NDI0ZmQ0M2NjZTkxNjNkMDJlYzg2MTcxY2I1OTA5NrD6o+E=: 00:18:15.806 13:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.806 13:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:15.806 13:57:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.806 13:57:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.806 13:57:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.806 13:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:15.806 13:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:15.806 13:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:15.806 13:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:15.806 13:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:18:15.806 13:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:15.806 13:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:15.806 13:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:15.806 13:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:15.806 13:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.806 13:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.806 13:57:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.806 13:57:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.806 13:57:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.806 13:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.807 13:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.372 00:18:16.372 13:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:16.372 13:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:16.372 13:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.629 13:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.629 13:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.629 13:57:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.629 13:57:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.629 13:57:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.629 13:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:16.629 { 00:18:16.629 "cntlid": 129, 00:18:16.629 "qid": 0, 00:18:16.629 "state": "enabled", 00:18:16.629 "thread": "nvmf_tgt_poll_group_000", 00:18:16.629 "listen_address": { 00:18:16.629 "trtype": "TCP", 00:18:16.629 "adrfam": "IPv4", 00:18:16.629 "traddr": "10.0.0.2", 00:18:16.629 "trsvcid": "4420" 00:18:16.629 }, 00:18:16.629 "peer_address": { 00:18:16.629 "trtype": "TCP", 00:18:16.629 "adrfam": "IPv4", 00:18:16.629 "traddr": "10.0.0.1", 00:18:16.629 "trsvcid": "36396" 00:18:16.629 }, 00:18:16.629 "auth": { 00:18:16.629 "state": "completed", 00:18:16.629 "digest": "sha512", 00:18:16.629 "dhgroup": "ffdhe6144" 00:18:16.629 } 00:18:16.629 } 00:18:16.629 ]' 00:18:16.629 13:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:16.629 13:57:11 
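Every connect_authenticate call in this trace runs the same cycle; reconstructed from the auth.sh@34-56 line numbers visible in the expansions, one iteration is, in order:

  # one connect_authenticate cycle as seen in the trace (outline, not the literal script)
  # 1. rpc_cmd nvmf_subsystem_add_host <subnqn> <hostnqn> --dhchap-key keyN [--dhchap-ctrlr-key ckeyN]
  # 2. hostrpc bdev_nvme_attach_controller ... --dhchap-key keyN [--dhchap-ctrlr-key ckeyN]
  # 3. hostrpc bdev_nvme_get_controllers | jq -r '.[].name'          -> expect nvme0
  # 4. rpc_cmd nvmf_subsystem_get_qpairs <subnqn> | jq '.[0].auth'   -> digest/dhgroup/state checks
  # 5. hostrpc bdev_nvme_detach_controller nvme0
  # 6. nvme connect ... --dhchap-secret DHHC-1:... [--dhchap-ctrl-secret DHHC-1:...]
  # 7. nvme disconnect -n <subnqn>
  # 8. rpc_cmd nvmf_subsystem_remove_host <subnqn> <hostnqn>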
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:16.629 13:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:16.629 13:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:16.629 13:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:16.888 13:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.888 13:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.888 13:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.148 13:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:Nzc1ZjI1MDM1NTRmYjVjMDJlOTAwZTAxM2U1NzM2YTZkMzAxNDc5ZTRhMjRkZjcyhKo8vA==: --dhchap-ctrl-secret DHHC-1:03:MmJlYmRhNmM0YTFkOTFjOTg1ZmEwOTZmYTMxOWIzN2EzYWFjYWIwNTFmMTEwNWIxNzFiNmIyMmU3OTJiMGFiOfuiRg8=: 00:18:18.084 13:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.084 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.084 13:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:18.084 13:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.084 13:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.084 13:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.084 13:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:18.084 13:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:18.084 13:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:18.084 13:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:18:18.084 13:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:18.084 13:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:18.084 13:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:18.084 13:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:18.084 13:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.084 13:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.084 13:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.084 13:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.084 13:57:12 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.084 13:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.084 13:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.651 00:18:18.651 13:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:18.651 13:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:18.651 13:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.909 13:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.909 13:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.909 13:57:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.909 13:57:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.909 13:57:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.909 13:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:18.909 { 00:18:18.909 "cntlid": 131, 00:18:18.909 "qid": 0, 00:18:18.909 "state": "enabled", 00:18:18.909 "thread": "nvmf_tgt_poll_group_000", 00:18:18.909 "listen_address": { 00:18:18.909 "trtype": "TCP", 00:18:18.909 "adrfam": "IPv4", 00:18:18.909 "traddr": "10.0.0.2", 00:18:18.909 "trsvcid": "4420" 00:18:18.909 }, 00:18:18.909 "peer_address": { 00:18:18.909 "trtype": "TCP", 00:18:18.909 "adrfam": "IPv4", 00:18:18.909 "traddr": "10.0.0.1", 00:18:18.909 "trsvcid": "36422" 00:18:18.909 }, 00:18:18.909 "auth": { 00:18:18.909 "state": "completed", 00:18:18.909 "digest": "sha512", 00:18:18.909 "dhgroup": "ffdhe6144" 00:18:18.909 } 00:18:18.909 } 00:18:18.909 ]' 00:18:18.909 13:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:18.909 13:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:18.909 13:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:18.909 13:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:18.909 13:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:18.909 13:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.909 13:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.909 13:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.168 13:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:YTI1MTEzZjM0NjFkNTkwYjUwNjc5ODlmOTNhZjc3NTFLAk7g: --dhchap-ctrl-secret DHHC-1:02:NjMxYTExZjU0NmQ1NTlhYjcwZDhkODc5NTk3ZjI1OTliZjJlZWFmNTdhNjdlYmRlkGfA3A==: 00:18:20.100 13:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.100 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.101 13:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:20.101 13:57:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.101 13:57:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.101 13:57:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.101 13:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:20.101 13:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:20.101 13:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:20.359 13:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:18:20.359 13:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:20.359 13:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:20.359 13:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:20.359 13:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:20.359 13:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.359 13:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.359 13:57:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.359 13:57:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.359 13:57:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.359 13:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.359 13:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.928 00:18:20.928 13:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:20.928 13:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:20.928 13:57:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.186 13:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.186 13:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.186 13:57:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.186 13:57:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.186 13:57:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.186 13:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:21.186 { 00:18:21.186 "cntlid": 133, 00:18:21.186 "qid": 0, 00:18:21.186 "state": "enabled", 00:18:21.186 "thread": "nvmf_tgt_poll_group_000", 00:18:21.186 "listen_address": { 00:18:21.186 "trtype": "TCP", 00:18:21.186 "adrfam": "IPv4", 00:18:21.186 "traddr": "10.0.0.2", 00:18:21.186 "trsvcid": "4420" 00:18:21.186 }, 00:18:21.186 "peer_address": { 00:18:21.186 "trtype": "TCP", 00:18:21.186 "adrfam": "IPv4", 00:18:21.186 "traddr": "10.0.0.1", 00:18:21.186 "trsvcid": "36434" 00:18:21.186 }, 00:18:21.186 "auth": { 00:18:21.186 "state": "completed", 00:18:21.186 "digest": "sha512", 00:18:21.186 "dhgroup": "ffdhe6144" 00:18:21.186 } 00:18:21.186 } 00:18:21.186 ]' 00:18:21.186 13:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:21.186 13:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:21.186 13:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:21.186 13:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:21.186 13:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:21.186 13:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.186 13:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.186 13:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.444 13:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:OWRhMmY3NmQxYmFmZTdlZDRlOTkzODdjN2JhNDI4ZGVhMmFlN2I2MTcyNThjNDMw/g+/BQ==: --dhchap-ctrl-secret DHHC-1:01:MTI0NWE4YzEzY2UzYmU1NWVhNmFjNWUwYTU1NWI3M2TBB42v: 00:18:22.378 13:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.378 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.378 13:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:22.378 13:57:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.378 13:57:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.378 13:57:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.378 13:57:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:22.378 13:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:22.378 13:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:22.635 13:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:18:22.635 13:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:22.635 13:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:22.635 13:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:22.635 13:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:22.635 13:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.635 13:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:18:22.635 13:57:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.635 13:57:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.635 13:57:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.635 13:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:22.635 13:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:23.202 00:18:23.203 13:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:23.203 13:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:23.203 13:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.460 13:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.460 13:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.460 13:57:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.460 13:57:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.460 13:57:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.460 13:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:23.460 { 00:18:23.460 "cntlid": 135, 00:18:23.460 "qid": 0, 00:18:23.460 "state": "enabled", 00:18:23.460 "thread": "nvmf_tgt_poll_group_000", 00:18:23.460 "listen_address": { 00:18:23.460 "trtype": "TCP", 00:18:23.460 "adrfam": "IPv4", 00:18:23.460 "traddr": "10.0.0.2", 00:18:23.460 "trsvcid": "4420" 00:18:23.460 }, 
00:18:23.460 "peer_address": { 00:18:23.460 "trtype": "TCP", 00:18:23.460 "adrfam": "IPv4", 00:18:23.460 "traddr": "10.0.0.1", 00:18:23.460 "trsvcid": "36466" 00:18:23.460 }, 00:18:23.460 "auth": { 00:18:23.460 "state": "completed", 00:18:23.460 "digest": "sha512", 00:18:23.460 "dhgroup": "ffdhe6144" 00:18:23.460 } 00:18:23.460 } 00:18:23.460 ]' 00:18:23.460 13:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:23.460 13:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:23.460 13:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:23.460 13:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:23.460 13:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:23.718 13:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.718 13:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.718 13:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.976 13:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MDlhZmU4ZTRiZTcwMzBiMGJiZTJhNWU5ZWQyMDJmNTE5NDI0ZmQ0M2NjZTkxNjNkMDJlYzg2MTcxY2I1OTA5NrD6o+E=: 00:18:24.910 13:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.910 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.911 13:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:24.911 13:57:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.911 13:57:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.911 13:57:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.911 13:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:24.911 13:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:24.911 13:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:24.911 13:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:24.911 13:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:18:24.911 13:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:24.911 13:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:24.911 13:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:24.911 13:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:24.911 13:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:18:24.911 13:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.911 13:57:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.911 13:57:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.911 13:57:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.911 13:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.911 13:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.848 00:18:25.848 13:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:25.848 13:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.848 13:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:26.105 13:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.105 13:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.105 13:57:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.105 13:57:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.105 13:57:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.105 13:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:26.105 { 00:18:26.105 "cntlid": 137, 00:18:26.105 "qid": 0, 00:18:26.105 "state": "enabled", 00:18:26.105 "thread": "nvmf_tgt_poll_group_000", 00:18:26.105 "listen_address": { 00:18:26.105 "trtype": "TCP", 00:18:26.105 "adrfam": "IPv4", 00:18:26.105 "traddr": "10.0.0.2", 00:18:26.105 "trsvcid": "4420" 00:18:26.105 }, 00:18:26.105 "peer_address": { 00:18:26.105 "trtype": "TCP", 00:18:26.105 "adrfam": "IPv4", 00:18:26.105 "traddr": "10.0.0.1", 00:18:26.105 "trsvcid": "36498" 00:18:26.105 }, 00:18:26.105 "auth": { 00:18:26.105 "state": "completed", 00:18:26.105 "digest": "sha512", 00:18:26.105 "dhgroup": "ffdhe8192" 00:18:26.105 } 00:18:26.105 } 00:18:26.105 ]' 00:18:26.105 13:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:26.105 13:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:26.105 13:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:26.105 13:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:26.105 13:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:26.105 13:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.105 13:57:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.105 13:57:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.363 13:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:Nzc1ZjI1MDM1NTRmYjVjMDJlOTAwZTAxM2U1NzM2YTZkMzAxNDc5ZTRhMjRkZjcyhKo8vA==: --dhchap-ctrl-secret DHHC-1:03:MmJlYmRhNmM0YTFkOTFjOTg1ZmEwOTZmYTMxOWIzN2EzYWFjYWIwNTFmMTEwNWIxNzFiNmIyMmU3OTJiMGFiOfuiRg8=: 00:18:27.300 13:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.300 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.300 13:57:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:27.300 13:57:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.300 13:57:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.300 13:57:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.300 13:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:27.300 13:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:27.300 13:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:27.557 13:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:18:27.557 13:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:27.557 13:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:27.557 13:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:27.557 13:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:27.557 13:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.557 13:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:27.557 13:57:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.557 13:57:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.557 13:57:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.557 13:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:27.557 13:57:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.499 00:18:28.499 13:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:28.499 13:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:28.499 13:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.499 13:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.499 13:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.499 13:57:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.499 13:57:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.756 13:57:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.756 13:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:28.756 { 00:18:28.756 "cntlid": 139, 00:18:28.756 "qid": 0, 00:18:28.756 "state": "enabled", 00:18:28.756 "thread": "nvmf_tgt_poll_group_000", 00:18:28.756 "listen_address": { 00:18:28.756 "trtype": "TCP", 00:18:28.756 "adrfam": "IPv4", 00:18:28.756 "traddr": "10.0.0.2", 00:18:28.756 "trsvcid": "4420" 00:18:28.756 }, 00:18:28.756 "peer_address": { 00:18:28.756 "trtype": "TCP", 00:18:28.756 "adrfam": "IPv4", 00:18:28.756 "traddr": "10.0.0.1", 00:18:28.756 "trsvcid": "53830" 00:18:28.756 }, 00:18:28.756 "auth": { 00:18:28.756 "state": "completed", 00:18:28.756 "digest": "sha512", 00:18:28.756 "dhgroup": "ffdhe8192" 00:18:28.756 } 00:18:28.756 } 00:18:28.756 ]' 00:18:28.756 13:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:28.756 13:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:28.756 13:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:28.756 13:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:28.756 13:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:28.756 13:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.756 13:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.756 13:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.012 13:57:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:YTI1MTEzZjM0NjFkNTkwYjUwNjc5ODlmOTNhZjc3NTFLAk7g: --dhchap-ctrl-secret DHHC-1:02:NjMxYTExZjU0NmQ1NTlhYjcwZDhkODc5NTk3ZjI1OTliZjJlZWFmNTdhNjdlYmRlkGfA3A==: 00:18:29.946 13:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.946 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.946 13:57:24 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:29.946 13:57:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.946 13:57:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.946 13:57:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.946 13:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:29.946 13:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:29.946 13:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:30.203 13:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:18:30.203 13:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:30.203 13:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:30.203 13:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:30.203 13:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:30.203 13:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.203 13:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:30.203 13:57:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.203 13:57:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.203 13:57:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.203 13:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:30.203 13:57:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.140 00:18:31.140 13:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:31.140 13:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:31.140 13:57:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.443 13:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.443 13:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.443 13:57:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.443 13:57:26 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:31.443 13:57:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.443 13:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:31.443 { 00:18:31.443 "cntlid": 141, 00:18:31.443 "qid": 0, 00:18:31.443 "state": "enabled", 00:18:31.443 "thread": "nvmf_tgt_poll_group_000", 00:18:31.443 "listen_address": { 00:18:31.443 "trtype": "TCP", 00:18:31.443 "adrfam": "IPv4", 00:18:31.443 "traddr": "10.0.0.2", 00:18:31.443 "trsvcid": "4420" 00:18:31.443 }, 00:18:31.443 "peer_address": { 00:18:31.443 "trtype": "TCP", 00:18:31.443 "adrfam": "IPv4", 00:18:31.443 "traddr": "10.0.0.1", 00:18:31.443 "trsvcid": "53854" 00:18:31.443 }, 00:18:31.443 "auth": { 00:18:31.443 "state": "completed", 00:18:31.443 "digest": "sha512", 00:18:31.443 "dhgroup": "ffdhe8192" 00:18:31.443 } 00:18:31.443 } 00:18:31.443 ]' 00:18:31.443 13:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:31.443 13:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:31.443 13:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:31.443 13:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:31.443 13:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:31.443 13:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.443 13:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.443 13:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.725 13:57:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:OWRhMmY3NmQxYmFmZTdlZDRlOTkzODdjN2JhNDI4ZGVhMmFlN2I2MTcyNThjNDMw/g+/BQ==: --dhchap-ctrl-secret DHHC-1:01:MTI0NWE4YzEzY2UzYmU1NWVhNmFjNWUwYTU1NWI3M2TBB42v: 00:18:32.657 13:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.657 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.657 13:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:32.657 13:57:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.657 13:57:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.657 13:57:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.657 13:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:32.657 13:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:32.657 13:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:32.915 13:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:18:32.915 13:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:32.915 13:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:32.915 13:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:32.915 13:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:32.915 13:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:32.915 13:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:18:32.915 13:57:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.915 13:57:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.915 13:57:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.915 13:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:32.915 13:57:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:33.848 00:18:33.848 13:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:33.848 13:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:33.848 13:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.848 13:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.848 13:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.848 13:57:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.848 13:57:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.848 13:57:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.848 13:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:33.848 { 00:18:33.848 "cntlid": 143, 00:18:33.848 "qid": 0, 00:18:33.848 "state": "enabled", 00:18:33.848 "thread": "nvmf_tgt_poll_group_000", 00:18:33.848 "listen_address": { 00:18:33.848 "trtype": "TCP", 00:18:33.848 "adrfam": "IPv4", 00:18:33.848 "traddr": "10.0.0.2", 00:18:33.848 "trsvcid": "4420" 00:18:33.848 }, 00:18:33.848 "peer_address": { 00:18:33.848 "trtype": "TCP", 00:18:33.848 "adrfam": "IPv4", 00:18:33.848 "traddr": "10.0.0.1", 00:18:33.848 "trsvcid": "53892" 00:18:33.848 }, 00:18:33.848 "auth": { 00:18:33.848 "state": "completed", 00:18:33.848 "digest": "sha512", 00:18:33.848 "dhgroup": "ffdhe8192" 00:18:33.848 } 00:18:33.848 } 00:18:33.848 ]' 00:18:33.848 13:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:33.848 13:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:33.848 
13:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:34.106 13:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:34.106 13:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:34.106 13:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.106 13:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.106 13:57:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.363 13:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MDlhZmU4ZTRiZTcwMzBiMGJiZTJhNWU5ZWQyMDJmNTE5NDI0ZmQ0M2NjZTkxNjNkMDJlYzg2MTcxY2I1OTA5NrD6o+E=: 00:18:35.300 13:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.300 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.300 13:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:35.300 13:57:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.300 13:57:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.300 13:57:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.300 13:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:18:35.300 13:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:18:35.300 13:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:18:35.300 13:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:35.300 13:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:35.300 13:57:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:35.558 13:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:18:35.558 13:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:35.558 13:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:35.558 13:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:35.558 13:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:35.558 13:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.558 13:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:18:35.558 13:57:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.558 13:57:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.558 13:57:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.558 13:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.558 13:57:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.496 00:18:36.496 13:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:36.496 13:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:36.496 13:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.754 13:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.754 13:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.754 13:57:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.754 13:57:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.754 13:57:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.754 13:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:36.754 { 00:18:36.754 "cntlid": 145, 00:18:36.754 "qid": 0, 00:18:36.754 "state": "enabled", 00:18:36.754 "thread": "nvmf_tgt_poll_group_000", 00:18:36.754 "listen_address": { 00:18:36.754 "trtype": "TCP", 00:18:36.754 "adrfam": "IPv4", 00:18:36.754 "traddr": "10.0.0.2", 00:18:36.754 "trsvcid": "4420" 00:18:36.754 }, 00:18:36.754 "peer_address": { 00:18:36.754 "trtype": "TCP", 00:18:36.754 "adrfam": "IPv4", 00:18:36.754 "traddr": "10.0.0.1", 00:18:36.754 "trsvcid": "59832" 00:18:36.754 }, 00:18:36.754 "auth": { 00:18:36.754 "state": "completed", 00:18:36.754 "digest": "sha512", 00:18:36.754 "dhgroup": "ffdhe8192" 00:18:36.754 } 00:18:36.754 } 00:18:36.754 ]' 00:18:36.754 13:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:36.754 13:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:36.754 13:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:36.754 13:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:36.754 13:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:36.754 13:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.754 13:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.754 13:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.011 13:57:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:Nzc1ZjI1MDM1NTRmYjVjMDJlOTAwZTAxM2U1NzM2YTZkMzAxNDc5ZTRhMjRkZjcyhKo8vA==: --dhchap-ctrl-secret DHHC-1:03:MmJlYmRhNmM0YTFkOTFjOTg1ZmEwOTZmYTMxOWIzN2EzYWFjYWIwNTFmMTEwNWIxNzFiNmIyMmU3OTJiMGFiOfuiRg8=: 00:18:37.946 13:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.946 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.946 13:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:37.946 13:57:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.946 13:57:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.946 13:57:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.946 13:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 00:18:37.946 13:57:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.946 13:57:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.946 13:57:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.946 13:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:37.946 13:57:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:37.946 13:57:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:37.946 13:57:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:37.946 13:57:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:37.946 13:57:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:37.946 13:57:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:37.946 13:57:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:37.946 13:57:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:18:38.883 request: 00:18:38.883 { 00:18:38.883 "name": "nvme0", 00:18:38.883 "trtype": "tcp", 00:18:38.883 "traddr": "10.0.0.2", 00:18:38.883 "adrfam": "ipv4", 00:18:38.883 "trsvcid": "4420", 00:18:38.883 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:38.883 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:38.883 "prchk_reftag": false, 00:18:38.883 "prchk_guard": false, 00:18:38.883 "hdgst": false, 00:18:38.883 "ddgst": false, 00:18:38.883 "dhchap_key": "key2", 00:18:38.883 "method": "bdev_nvme_attach_controller", 00:18:38.883 "req_id": 1 00:18:38.883 } 00:18:38.883 Got JSON-RPC error response 00:18:38.883 response: 00:18:38.883 { 00:18:38.883 "code": -5, 00:18:38.883 "message": "Input/output error" 00:18:38.883 } 00:18:38.883 13:57:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:38.883 13:57:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:38.883 13:57:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:38.883 13:57:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:38.883 13:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:38.883 13:57:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.883 13:57:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.883 13:57:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.883 13:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.883 13:57:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.883 13:57:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.883 13:57:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.883 13:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:38.883 13:57:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:38.883 13:57:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:38.883 13:57:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:38.883 13:57:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:38.883 13:57:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:38.883 13:57:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:38.883 13:57:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:38.883 13:57:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:39.450 request: 00:18:39.450 { 00:18:39.450 "name": "nvme0", 00:18:39.450 "trtype": "tcp", 00:18:39.450 "traddr": "10.0.0.2", 00:18:39.450 "adrfam": "ipv4", 00:18:39.450 "trsvcid": "4420", 00:18:39.450 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:39.450 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:39.450 "prchk_reftag": false, 00:18:39.450 "prchk_guard": false, 00:18:39.450 "hdgst": false, 00:18:39.450 "ddgst": false, 00:18:39.450 "dhchap_key": "key1", 00:18:39.450 "dhchap_ctrlr_key": "ckey2", 00:18:39.450 "method": "bdev_nvme_attach_controller", 00:18:39.450 "req_id": 1 00:18:39.450 } 00:18:39.450 Got JSON-RPC error response 00:18:39.450 response: 00:18:39.450 { 00:18:39.450 "code": -5, 00:18:39.450 "message": "Input/output error" 00:18:39.450 } 00:18:39.450 13:57:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:39.450 13:57:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:39.450 13:57:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:39.450 13:57:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:39.450 13:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:39.450 13:57:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.450 13:57:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.450 13:57:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.450 13:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 00:18:39.450 13:57:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.450 13:57:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.450 13:57:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.450 13:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.450 13:57:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:39.450 13:57:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.450 13:57:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:18:39.450 13:57:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:39.450 13:57:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:39.450 13:57:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:39.450 13:57:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.450 13:57:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.384 request: 00:18:40.384 { 00:18:40.384 "name": "nvme0", 00:18:40.384 "trtype": "tcp", 00:18:40.384 "traddr": "10.0.0.2", 00:18:40.384 "adrfam": "ipv4", 00:18:40.384 "trsvcid": "4420", 00:18:40.384 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:40.384 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:40.384 "prchk_reftag": false, 00:18:40.384 "prchk_guard": false, 00:18:40.384 "hdgst": false, 00:18:40.384 "ddgst": false, 00:18:40.384 "dhchap_key": "key1", 00:18:40.384 "dhchap_ctrlr_key": "ckey1", 00:18:40.384 "method": "bdev_nvme_attach_controller", 00:18:40.384 "req_id": 1 00:18:40.384 } 00:18:40.384 Got JSON-RPC error response 00:18:40.384 response: 00:18:40.385 { 00:18:40.385 "code": -5, 00:18:40.385 "message": "Input/output error" 00:18:40.385 } 00:18:40.385 13:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:40.385 13:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:40.385 13:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:40.385 13:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:40.385 13:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:40.385 13:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.385 13:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.385 13:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.385 13:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 3747258 00:18:40.385 13:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3747258 ']' 00:18:40.385 13:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3747258 00:18:40.385 13:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:18:40.385 13:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:40.385 13:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3747258 00:18:40.385 13:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:40.385 13:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:18:40.385 13:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3747258' 00:18:40.385 killing process with pid 3747258 00:18:40.385 13:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3747258 00:18:40.385 13:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3747258 00:18:40.642 13:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:40.642 13:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:40.642 13:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:40.642 13:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.642 13:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:40.642 13:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3769212 00:18:40.642 13:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3769212 00:18:40.642 13:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3769212 ']' 00:18:40.642 13:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:40.642 13:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:40.642 13:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:40.642 13:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:40.642 13:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.900 13:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:40.900 13:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:40.900 13:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:40.900 13:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:40.900 13:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.900 13:57:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:40.900 13:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:40.900 13:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 3769212 00:18:40.900 13:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3769212 ']' 00:18:40.900 13:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:40.900 13:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:40.900 13:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:40.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
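[connect_authenticate() flow, summarized — a minimal sketch that re-uses only the rpc.py calls, addresses, socket paths and key names (key1/ckey1) already visible in the trace above; rpc.py paths are shortened, and the authoritative sequence is the one in target/auth.sh]
# host side (-s /var/tmp/host.sock): restrict the digests/dhgroups the initiator may negotiate
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
# target side (default /var/tmp/spdk.sock): allow the host NQN and bind its DH-HMAC-CHAP keys
scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
# host side: authenticated connect using the same key pair
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
# target side: verify what the qpair actually negotiated
scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
    | jq -r '.[0].auth | .state, .digest, .dhgroup'   # expect: completed / sha512 / ffdhe8192
# host side: tear down before the next digest/dhgroup combination
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
The "NOT hostrpc bdev_nvme_attach_controller ... --dhchap-key key2" invocations earlier in the trace are negative checks: attaching with a key the target was not configured for is expected to fail, which is what the JSON-RPC "Input/output error" (code -5) responses above show.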
00:18:40.900 13:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:40.900 13:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.159 13:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:41.159 13:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:41.159 13:57:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:18:41.159 13:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.159 13:57:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.417 13:57:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.417 13:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:18:41.417 13:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:41.417 13:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:41.417 13:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:41.417 13:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:41.417 13:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:41.417 13:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:18:41.417 13:57:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.417 13:57:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.417 13:57:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.417 13:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:41.417 13:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:42.348 00:18:42.348 13:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:42.348 13:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:42.348 13:57:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.348 13:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.348 13:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.348 13:57:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.348 13:57:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.348 13:57:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.348 13:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:42.348 { 00:18:42.348 
"cntlid": 1, 00:18:42.348 "qid": 0, 00:18:42.348 "state": "enabled", 00:18:42.348 "thread": "nvmf_tgt_poll_group_000", 00:18:42.348 "listen_address": { 00:18:42.348 "trtype": "TCP", 00:18:42.348 "adrfam": "IPv4", 00:18:42.348 "traddr": "10.0.0.2", 00:18:42.348 "trsvcid": "4420" 00:18:42.348 }, 00:18:42.348 "peer_address": { 00:18:42.348 "trtype": "TCP", 00:18:42.348 "adrfam": "IPv4", 00:18:42.348 "traddr": "10.0.0.1", 00:18:42.348 "trsvcid": "59888" 00:18:42.348 }, 00:18:42.348 "auth": { 00:18:42.348 "state": "completed", 00:18:42.348 "digest": "sha512", 00:18:42.348 "dhgroup": "ffdhe8192" 00:18:42.348 } 00:18:42.348 } 00:18:42.348 ]' 00:18:42.348 13:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:42.604 13:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:42.604 13:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:42.604 13:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:42.604 13:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:42.604 13:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.604 13:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.604 13:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.861 13:57:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MDlhZmU4ZTRiZTcwMzBiMGJiZTJhNWU5ZWQyMDJmNTE5NDI0ZmQ0M2NjZTkxNjNkMDJlYzg2MTcxY2I1OTA5NrD6o+E=: 00:18:43.792 13:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.792 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.792 13:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:43.792 13:57:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.792 13:57:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.792 13:57:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.792 13:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:18:43.792 13:57:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.792 13:57:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.792 13:57:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.792 13:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:43.792 13:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:44.050 13:57:38 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:44.050 13:57:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:44.050 13:57:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:44.050 13:57:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:44.050 13:57:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:44.050 13:57:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:44.050 13:57:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:44.050 13:57:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:44.050 13:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:44.308 request: 00:18:44.308 { 00:18:44.308 "name": "nvme0", 00:18:44.308 "trtype": "tcp", 00:18:44.308 "traddr": "10.0.0.2", 00:18:44.308 "adrfam": "ipv4", 00:18:44.308 "trsvcid": "4420", 00:18:44.308 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:44.308 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:44.308 "prchk_reftag": false, 00:18:44.308 "prchk_guard": false, 00:18:44.308 "hdgst": false, 00:18:44.308 "ddgst": false, 00:18:44.309 "dhchap_key": "key3", 00:18:44.309 "method": "bdev_nvme_attach_controller", 00:18:44.309 "req_id": 1 00:18:44.309 } 00:18:44.309 Got JSON-RPC error response 00:18:44.309 response: 00:18:44.309 { 00:18:44.309 "code": -5, 00:18:44.309 "message": "Input/output error" 00:18:44.309 } 00:18:44.309 13:57:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:44.309 13:57:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:44.309 13:57:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:44.309 13:57:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:44.309 13:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:18:44.309 13:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:18:44.309 13:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:44.309 13:57:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:44.566 13:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:44.566 13:57:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:44.566 13:57:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:44.566 13:57:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:44.566 13:57:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:44.566 13:57:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:44.566 13:57:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:44.566 13:57:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:44.566 13:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:44.824 request: 00:18:44.824 { 00:18:44.824 "name": "nvme0", 00:18:44.824 "trtype": "tcp", 00:18:44.824 "traddr": "10.0.0.2", 00:18:44.824 "adrfam": "ipv4", 00:18:44.824 "trsvcid": "4420", 00:18:44.824 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:44.824 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:44.824 "prchk_reftag": false, 00:18:44.824 "prchk_guard": false, 00:18:44.824 "hdgst": false, 00:18:44.824 "ddgst": false, 00:18:44.824 "dhchap_key": "key3", 00:18:44.824 "method": "bdev_nvme_attach_controller", 00:18:44.824 "req_id": 1 00:18:44.824 } 00:18:44.824 Got JSON-RPC error response 00:18:44.824 response: 00:18:44.824 { 00:18:44.824 "code": -5, 00:18:44.824 "message": "Input/output error" 00:18:44.824 } 00:18:44.824 13:57:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:44.824 13:57:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:44.824 13:57:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:44.824 13:57:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:44.824 13:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:44.824 13:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:18:44.824 13:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:44.824 13:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:44.824 13:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:44.824 13:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:45.082 13:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:45.082 13:57:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.082 13:57:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.082 13:57:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.082 13:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:45.082 13:57:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.082 13:57:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.082 13:57:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.082 13:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:45.082 13:57:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:45.082 13:57:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:45.082 13:57:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:45.082 13:57:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:45.082 13:57:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:45.082 13:57:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:45.082 13:57:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:45.082 13:57:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:45.340 request: 00:18:45.340 { 00:18:45.340 "name": "nvme0", 00:18:45.340 "trtype": "tcp", 00:18:45.340 "traddr": "10.0.0.2", 00:18:45.340 "adrfam": "ipv4", 00:18:45.340 "trsvcid": "4420", 00:18:45.340 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:45.340 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:45.340 "prchk_reftag": false, 00:18:45.340 "prchk_guard": false, 00:18:45.340 "hdgst": false, 00:18:45.340 "ddgst": false, 00:18:45.340 
"dhchap_key": "key0", 00:18:45.340 "dhchap_ctrlr_key": "key1", 00:18:45.340 "method": "bdev_nvme_attach_controller", 00:18:45.340 "req_id": 1 00:18:45.340 } 00:18:45.340 Got JSON-RPC error response 00:18:45.340 response: 00:18:45.340 { 00:18:45.340 "code": -5, 00:18:45.340 "message": "Input/output error" 00:18:45.340 } 00:18:45.340 13:57:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:45.340 13:57:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:45.340 13:57:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:45.340 13:57:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:45.340 13:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:45.340 13:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:45.599 00:18:45.857 13:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:18:45.857 13:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:18:45.857 13:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.857 13:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.857 13:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.857 13:57:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.422 13:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:18:46.422 13:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:18:46.423 13:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3747396 00:18:46.423 13:57:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3747396 ']' 00:18:46.423 13:57:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3747396 00:18:46.423 13:57:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:18:46.423 13:57:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:46.423 13:57:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3747396 00:18:46.423 13:57:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:46.423 13:57:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:46.423 13:57:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3747396' 00:18:46.423 killing process with pid 3747396 00:18:46.423 13:57:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3747396 00:18:46.423 13:57:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3747396 
00:18:46.681 13:57:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:46.681 13:57:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:46.681 13:57:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:18:46.681 13:57:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:46.681 13:57:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:18:46.681 13:57:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:46.681 13:57:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:46.681 rmmod nvme_tcp 00:18:46.681 rmmod nvme_fabrics 00:18:46.681 rmmod nvme_keyring 00:18:46.939 13:57:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:46.940 13:57:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:18:46.940 13:57:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:18:46.940 13:57:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 3769212 ']' 00:18:46.940 13:57:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 3769212 00:18:46.940 13:57:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3769212 ']' 00:18:46.940 13:57:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3769212 00:18:46.940 13:57:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:18:46.940 13:57:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:46.940 13:57:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3769212 00:18:46.940 13:57:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:46.940 13:57:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:46.940 13:57:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3769212' 00:18:46.940 killing process with pid 3769212 00:18:46.940 13:57:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3769212 00:18:46.940 13:57:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3769212 00:18:47.199 13:57:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:47.199 13:57:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:47.199 13:57:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:47.199 13:57:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:47.199 13:57:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:47.199 13:57:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:47.199 13:57:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:47.199 13:57:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:49.103 13:57:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:49.103 13:57:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.dOk /tmp/spdk.key-sha256.4VU /tmp/spdk.key-sha384.aI5 /tmp/spdk.key-sha512.xHy /tmp/spdk.key-sha512.KVA /tmp/spdk.key-sha384.MYz /tmp/spdk.key-sha256.d0r '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:49.103 00:18:49.103 real 3m2.385s 00:18:49.103 user 7m6.352s 00:18:49.103 sys 0m25.339s 00:18:49.103 13:57:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:49.103 13:57:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.103 ************************************ 00:18:49.103 END TEST nvmf_auth_target 00:18:49.103 ************************************ 00:18:49.103 13:57:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:49.103 13:57:43 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:18:49.103 13:57:43 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:49.103 13:57:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:18:49.103 13:57:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:49.103 13:57:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:49.103 ************************************ 00:18:49.103 START TEST nvmf_bdevio_no_huge 00:18:49.103 ************************************ 00:18:49.103 13:57:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:49.362 * Looking for test storage... 00:18:49.362 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:49.362 13:57:43 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:49.362 13:57:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:49.362 13:57:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:49.362 13:57:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:49.362 13:57:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:49.362 13:57:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:49.362 13:57:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:49.362 13:57:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:49.362 13:57:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:49.362 13:57:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:49.362 13:57:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:49.362 13:57:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:49.362 13:57:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:49.362 13:57:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:18:49.362 13:57:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:49.362 13:57:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:49.362 13:57:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:49.362 13:57:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
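The nvmf/common.sh variables sourced above (NVMF_PORT, NVME_SUBNQN, the generated NVME_HOSTNQN/NVME_HOSTID wrapped into the NVME_HOST array, NVME_CONNECT) are the pieces the host-side tests later assemble into a kernel-initiator connect. A hypothetical composition under those defaults; the target address comes from NVMF_FIRST_TARGET_IP, which is only assigned further down in nvmf_tcp_init:

  # roughly what $NVME_CONNECT expands to for the default test subsystem
  $NVME_CONNECT -t tcp -a "$NVMF_FIRST_TARGET_IP" -s "$NVMF_PORT" \
      -n "$NVME_SUBNQN" "${NVME_HOST[@]}"
  # i.e. nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn \
  #        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:<hostid> --hostid=<hostid>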
00:18:49.362 13:57:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:49.362 13:57:44 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:49.362 13:57:44 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:49.362 13:57:44 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:49.362 13:57:44 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.362 13:57:44 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.362 13:57:44 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.362 13:57:44 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:49.362 13:57:44 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.362 13:57:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:18:49.362 13:57:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:49.362 13:57:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:49.362 13:57:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:49.362 13:57:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:49.362 13:57:44 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:49.362 13:57:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:49.362 13:57:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:49.362 13:57:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:49.362 13:57:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:49.362 13:57:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:49.362 13:57:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:49.362 13:57:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:49.362 13:57:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:49.362 13:57:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:49.362 13:57:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:49.362 13:57:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:49.362 13:57:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:49.362 13:57:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:49.362 13:57:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:49.362 13:57:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:49.362 13:57:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:49.362 13:57:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:18:49.362 13:57:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:51.896 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:51.896 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:18:51.896 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:51.896 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:51.896 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:51.896 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:51.896 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:51.896 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:18:51.897 Found 0000:84:00.0 (0x8086 - 0x159b) 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:18:51.897 Found 0000:84:00.1 (0x8086 - 0x159b) 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:51.897 
13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:18:51.897 Found net devices under 0000:84:00.0: cvl_0_0 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:18:51.897 Found net devices under 0000:84:00.1: cvl_0_1 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:51.897 13:57:46 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:51.897 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:51.897 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:18:51.897 00:18:51.897 --- 10.0.0.2 ping statistics --- 00:18:51.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:51.897 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:51.897 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:51.897 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:18:51.897 00:18:51.897 --- 10.0.0.1 ping statistics --- 00:18:51.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:51.897 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=3771885 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 3771885 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 3771885 ']' 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:51.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:51.897 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:51.897 [2024-07-15 13:57:46.355930] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:18:51.897 [2024-07-15 13:57:46.356002] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:51.897 [2024-07-15 13:57:46.428615] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:51.897 [2024-07-15 13:57:46.534916] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:51.898 [2024-07-15 13:57:46.534973] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:51.898 [2024-07-15 13:57:46.535003] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:51.898 [2024-07-15 13:57:46.535015] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:51.898 [2024-07-15 13:57:46.535024] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
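The bdevio variant above runs nvmf_tgt with --no-huge -s 1024, i.e. 1024 MiB of ordinary (non-hugepage) memory on core mask 0x78, inside the cvl_0_0_ns_spdk namespace, and the startup notice names the tool for snapshotting the 0xFFFF tracepoint mask. A hedged sketch of reproducing that launch and capture by hand; the nvmf_tgt path is taken from the trace, while the spdk_trace location is assumed to sit in the same build/bin directory:

  bin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
  # start the target without hugepages: 1024 MiB regular memory, cores 3-6 (-m 0x78), all tracepoints
  ip netns exec cvl_0_0_ns_spdk "$bin/nvmf_tgt" -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
  # later, snapshot the nvmf tracepoints exactly as the startup notice suggests
  "$bin/spdk_trace" -s nvmf -i 0    # spdk_trace location assumed; the log only names the command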
00:18:51.898 [2024-07-15 13:57:46.535116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:18:51.898 [2024-07-15 13:57:46.535204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:18:51.898 [2024-07-15 13:57:46.535207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:51.898 [2024-07-15 13:57:46.535147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:18:51.898 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:51.898 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:18:51.898 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:51.898 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:51.898 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:51.898 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:51.898 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:51.898 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.898 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:51.898 [2024-07-15 13:57:46.647625] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:51.898 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.898 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:51.898 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.898 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:51.898 Malloc0 00:18:51.898 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.898 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:51.898 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.898 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:51.898 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.898 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:51.898 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.898 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:51.898 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.898 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:51.898 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.898 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:51.898 [2024-07-15 13:57:46.685182] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:51.898 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.898 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:51.898 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:51.898 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:18:51.898 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:18:51.898 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:51.898 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:51.898 { 00:18:51.898 "params": { 00:18:51.898 "name": "Nvme$subsystem", 00:18:51.898 "trtype": "$TEST_TRANSPORT", 00:18:51.898 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:51.898 "adrfam": "ipv4", 00:18:51.898 "trsvcid": "$NVMF_PORT", 00:18:51.898 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:51.898 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:51.898 "hdgst": ${hdgst:-false}, 00:18:51.898 "ddgst": ${ddgst:-false} 00:18:51.898 }, 00:18:51.898 "method": "bdev_nvme_attach_controller" 00:18:51.898 } 00:18:51.898 EOF 00:18:51.898 )") 00:18:51.898 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:18:51.898 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:18:51.898 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:18:51.898 13:57:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:51.898 "params": { 00:18:51.898 "name": "Nvme1", 00:18:51.898 "trtype": "tcp", 00:18:51.898 "traddr": "10.0.0.2", 00:18:51.898 "adrfam": "ipv4", 00:18:51.898 "trsvcid": "4420", 00:18:51.898 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:51.898 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:51.898 "hdgst": false, 00:18:51.898 "ddgst": false 00:18:51.898 }, 00:18:51.898 "method": "bdev_nvme_attach_controller" 00:18:51.898 }' 00:18:51.898 [2024-07-15 13:57:46.727670] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
00:18:51.898 [2024-07-15 13:57:46.727783] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3772029 ] 00:18:52.157 [2024-07-15 13:57:46.791816] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:52.157 [2024-07-15 13:57:46.906948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:52.157 [2024-07-15 13:57:46.907001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:52.157 [2024-07-15 13:57:46.907005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:52.415 I/O targets: 00:18:52.415 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:52.415 00:18:52.415 00:18:52.415 CUnit - A unit testing framework for C - Version 2.1-3 00:18:52.415 http://cunit.sourceforge.net/ 00:18:52.415 00:18:52.415 00:18:52.415 Suite: bdevio tests on: Nvme1n1 00:18:52.415 Test: blockdev write read block ...passed 00:18:52.415 Test: blockdev write zeroes read block ...passed 00:18:52.415 Test: blockdev write zeroes read no split ...passed 00:18:52.415 Test: blockdev write zeroes read split ...passed 00:18:52.673 Test: blockdev write zeroes read split partial ...passed 00:18:52.674 Test: blockdev reset ...[2024-07-15 13:57:47.274353] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:52.674 [2024-07-15 13:57:47.274460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e88670 (9): Bad file descriptor 00:18:52.674 [2024-07-15 13:57:47.370267] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:52.674 passed 00:18:52.674 Test: blockdev write read 8 blocks ...passed 00:18:52.674 Test: blockdev write read size > 128k ...passed 00:18:52.674 Test: blockdev write read invalid size ...passed 00:18:52.674 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:52.674 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:52.674 Test: blockdev write read max offset ...passed 00:18:52.674 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:52.933 Test: blockdev writev readv 8 blocks ...passed 00:18:52.933 Test: blockdev writev readv 30 x 1block ...passed 00:18:52.933 Test: blockdev writev readv block ...passed 00:18:52.933 Test: blockdev writev readv size > 128k ...passed 00:18:52.933 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:52.933 Test: blockdev comparev and writev ...[2024-07-15 13:57:47.625467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:52.933 [2024-07-15 13:57:47.625502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:52.933 [2024-07-15 13:57:47.625526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:52.933 [2024-07-15 13:57:47.625544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:52.933 [2024-07-15 13:57:47.625923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:52.933 [2024-07-15 13:57:47.625949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:52.933 [2024-07-15 13:57:47.625972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:52.933 [2024-07-15 13:57:47.625988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:52.933 [2024-07-15 13:57:47.626402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:52.933 [2024-07-15 13:57:47.626425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:52.933 [2024-07-15 13:57:47.626445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:52.933 [2024-07-15 13:57:47.626461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:52.933 [2024-07-15 13:57:47.626832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:52.933 [2024-07-15 13:57:47.626855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:52.933 [2024-07-15 13:57:47.626877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:52.933 [2024-07-15 13:57:47.626892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:52.933 passed 00:18:52.933 Test: blockdev nvme passthru rw ...passed 00:18:52.933 Test: blockdev nvme passthru vendor specific ...[2024-07-15 13:57:47.710125] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:52.933 [2024-07-15 13:57:47.710154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:52.933 [2024-07-15 13:57:47.710312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:52.933 [2024-07-15 13:57:47.710335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:52.933 [2024-07-15 13:57:47.710489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:52.933 [2024-07-15 13:57:47.710512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:52.933 [2024-07-15 13:57:47.710667] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:52.933 [2024-07-15 13:57:47.710688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:52.933 passed 00:18:52.933 Test: blockdev nvme admin passthru ...passed 00:18:52.933 Test: blockdev copy ...passed 00:18:52.933 00:18:52.933 Run Summary: Type Total Ran Passed Failed Inactive 00:18:52.933 suites 1 1 n/a 0 0 00:18:52.933 tests 23 23 23 0 0 00:18:52.933 asserts 152 152 152 0 n/a 00:18:52.933 00:18:52.933 Elapsed time = 1.314 seconds 00:18:53.499 13:57:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:53.499 13:57:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.499 13:57:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:53.499 13:57:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.499 13:57:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:53.499 13:57:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:53.499 13:57:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:53.499 13:57:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:18:53.499 13:57:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:53.499 13:57:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:18:53.499 13:57:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:53.499 13:57:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:53.499 rmmod nvme_tcp 00:18:53.499 rmmod nvme_fabrics 00:18:53.499 rmmod nvme_keyring 00:18:53.499 13:57:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:53.499 13:57:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:18:53.499 13:57:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:18:53.499 13:57:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 3771885 ']' 00:18:53.499 13:57:48 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 3771885 00:18:53.499 13:57:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 3771885 ']' 00:18:53.499 13:57:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 3771885 00:18:53.499 13:57:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:18:53.499 13:57:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:53.499 13:57:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3771885 00:18:53.499 13:57:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:18:53.499 13:57:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:18:53.499 13:57:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3771885' 00:18:53.499 killing process with pid 3771885 00:18:53.499 13:57:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 3771885 00:18:53.499 13:57:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 3771885 00:18:54.068 13:57:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:54.068 13:57:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:54.068 13:57:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:54.068 13:57:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:54.068 13:57:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:54.068 13:57:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:54.069 13:57:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:54.069 13:57:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:56.008 13:57:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:56.008 00:18:56.008 real 0m6.709s 00:18:56.008 user 0m10.954s 00:18:56.008 sys 0m2.626s 00:18:56.008 13:57:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:56.008 13:57:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:56.008 ************************************ 00:18:56.008 END TEST nvmf_bdevio_no_huge 00:18:56.008 ************************************ 00:18:56.008 13:57:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:56.008 13:57:50 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:56.008 13:57:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:56.008 13:57:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:56.008 13:57:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:56.008 ************************************ 00:18:56.008 START TEST nvmf_tls 00:18:56.008 ************************************ 00:18:56.008 13:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:56.008 * Looking for test storage... 
00:18:56.008 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:56.008 13:57:50 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:56.008 13:57:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:56.008 13:57:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:56.008 13:57:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:56.008 13:57:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:56.008 13:57:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:56.008 13:57:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:56.008 13:57:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:56.008 13:57:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:56.008 13:57:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:56.008 13:57:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:56.008 13:57:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:56.008 13:57:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:56.008 13:57:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:18:56.008 13:57:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:56.008 13:57:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:56.008 13:57:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:56.008 13:57:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:56.008 13:57:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:56.008 13:57:50 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:56.008 13:57:50 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:56.008 13:57:50 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:56.008 13:57:50 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.008 13:57:50 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.008 13:57:50 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.008 13:57:50 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:56.008 13:57:50 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.008 13:57:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:18:56.008 13:57:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:56.008 13:57:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:56.008 13:57:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:56.008 13:57:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:56.008 13:57:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:56.008 13:57:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:56.008 13:57:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:56.008 13:57:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:56.008 13:57:50 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:56.008 13:57:50 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:18:56.008 13:57:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:56.008 13:57:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:56.008 13:57:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:56.008 13:57:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:56.008 13:57:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:56.008 13:57:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:56.008 13:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:56.008 13:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:56.008 13:57:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:56.008 13:57:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:56.008 13:57:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:18:56.008 13:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:58.535 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:58.535 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:18:58.535 
13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:58.535 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:58.535 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:58.535 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:58.535 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:58.535 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:18:58.535 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:58.535 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:18:58.535 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:18:58.535 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:18:58.535 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:18:58.535 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:18:58.535 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:18:58.535 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:58.535 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:58.535 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:58.535 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:58.535 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:58.535 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:58.535 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:58.535 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:58.535 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:58.535 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:58.535 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:58.535 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:58.535 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:58.535 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:58.535 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:58.535 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:58.535 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:58.535 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:58.535 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:18:58.535 Found 0000:84:00.0 (0x8086 - 0x159b) 00:18:58.535 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:58.535 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:58.535 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:58.535 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:58.535 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:58.535 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:18:58.535 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:18:58.535 Found 0000:84:00.1 (0x8086 - 0x159b) 00:18:58.535 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:58.535 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:58.535 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:58.535 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:58.535 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:58.535 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:58.535 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:58.535 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:58.535 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:58.535 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:58.535 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:58.535 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:58.535 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:58.535 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:58.535 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:58.535 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:18:58.535 Found net devices under 0000:84:00.0: cvl_0_0 00:18:58.535 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:58.535 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:58.536 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:58.536 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:58.536 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:58.536 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:58.536 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:58.536 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:58.536 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:18:58.536 Found net devices under 0000:84:00.1: cvl_0_1 00:18:58.536 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:58.536 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:58.536 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:18:58.536 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:58.536 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:58.536 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:58.536 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:58.536 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:58.536 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:58.536 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:18:58.536 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:58.536 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:58.536 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:58.536 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:58.536 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:58.536 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:58.536 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:58.536 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:58.536 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:58.536 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:58.536 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:58.536 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:58.536 13:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:58.536 13:57:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:58.536 13:57:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:58.536 13:57:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:58.536 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:58.536 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:18:58.536 00:18:58.536 --- 10.0.0.2 ping statistics --- 00:18:58.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:58.536 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:18:58.536 13:57:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:58.536 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:58.536 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:18:58.536 00:18:58.536 --- 10.0.0.1 ping statistics --- 00:18:58.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:58.536 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:18:58.536 13:57:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:58.536 13:57:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:18:58.536 13:57:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:58.536 13:57:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:58.536 13:57:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:58.536 13:57:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:58.536 13:57:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:58.536 13:57:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:58.536 13:57:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:58.536 13:57:53 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:58.536 13:57:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:58.536 13:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:58.536 13:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:58.536 13:57:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3774120 00:18:58.536 13:57:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:58.536 13:57:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3774120 00:18:58.536 13:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3774120 ']' 00:18:58.536 13:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:58.536 13:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:58.536 13:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:58.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:58.536 13:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:58.536 13:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:58.536 [2024-07-15 13:57:53.119945] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:18:58.536 [2024-07-15 13:57:53.120030] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:58.536 EAL: No free 2048 kB hugepages reported on node 1 00:18:58.536 [2024-07-15 13:57:53.190238] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.536 [2024-07-15 13:57:53.302706] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:58.536 [2024-07-15 13:57:53.302780] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:58.536 [2024-07-15 13:57:53.302811] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:58.536 [2024-07-15 13:57:53.302823] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:58.536 [2024-07-15 13:57:53.302833] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:58.536 [2024-07-15 13:57:53.302866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:58.536 13:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:58.536 13:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:58.536 13:57:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:58.536 13:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:58.536 13:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:58.536 13:57:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:58.536 13:57:53 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:18:58.536 13:57:53 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:58.794 true 00:18:58.794 13:57:53 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:58.794 13:57:53 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:18:59.052 13:57:53 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:18:59.052 13:57:53 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:18:59.052 13:57:53 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:59.309 13:57:54 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:59.309 13:57:54 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:18:59.566 13:57:54 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:18:59.566 13:57:54 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:18:59.566 13:57:54 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:59.824 13:57:54 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:59.824 13:57:54 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:19:00.081 13:57:54 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:19:00.081 13:57:54 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:19:00.081 13:57:54 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:00.081 13:57:54 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:19:00.339 13:57:55 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:19:00.339 13:57:55 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:19:00.339 13:57:55 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:00.596 13:57:55 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:00.596 13:57:55 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:19:00.854 13:57:55 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:19:00.854 13:57:55 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:19:00.854 13:57:55 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:01.112 13:57:55 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:01.112 13:57:55 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:19:01.370 13:57:56 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:19:01.370 13:57:56 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:19:01.370 13:57:56 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:01.370 13:57:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:01.370 13:57:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:01.370 13:57:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:01.370 13:57:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:19:01.370 13:57:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:19:01.370 13:57:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:01.370 13:57:56 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:01.370 13:57:56 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:01.370 13:57:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:01.370 13:57:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:01.370 13:57:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:01.370 13:57:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:19:01.370 13:57:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:19:01.370 13:57:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:01.370 13:57:56 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:01.370 13:57:56 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:19:01.370 13:57:56 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.Ca5xJnJ5bC 00:19:01.370 13:57:56 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:01.370 13:57:56 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.iDIJMhKbAr 00:19:01.370 13:57:56 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:01.370 13:57:56 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:01.370 13:57:56 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.Ca5xJnJ5bC 00:19:01.370 13:57:56 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.iDIJMhKbAr 00:19:01.370 13:57:56 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:19:01.628 13:57:56 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:02.193 13:57:56 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.Ca5xJnJ5bC 00:19:02.193 13:57:56 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Ca5xJnJ5bC 00:19:02.193 13:57:56 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:02.450 [2024-07-15 13:57:57.077943] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:02.450 13:57:57 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:02.708 13:57:57 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:02.965 [2024-07-15 13:57:57.623442] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:02.965 [2024-07-15 13:57:57.623644] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:02.965 13:57:57 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:03.222 malloc0 00:19:03.222 13:57:57 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:03.479 13:57:58 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Ca5xJnJ5bC 00:19:03.735 [2024-07-15 13:57:58.347588] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:03.735 13:57:58 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.Ca5xJnJ5bC 00:19:03.735 EAL: No free 2048 kB hugepages reported on node 1 00:19:13.698 Initializing NVMe Controllers 00:19:13.698 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:13.698 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:13.698 Initialization complete. Launching workers. 
00:19:13.698 ======================================================== 00:19:13.698 Latency(us) 00:19:13.698 Device Information : IOPS MiB/s Average min max 00:19:13.698 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8693.05 33.96 7364.21 1032.84 9140.42 00:19:13.698 ======================================================== 00:19:13.698 Total : 8693.05 33.96 7364.21 1032.84 9140.42 00:19:13.698 00:19:13.698 13:58:08 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ca5xJnJ5bC 00:19:13.698 13:58:08 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:13.698 13:58:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:13.698 13:58:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:13.698 13:58:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Ca5xJnJ5bC' 00:19:13.698 13:58:08 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:13.698 13:58:08 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3776006 00:19:13.698 13:58:08 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:13.698 13:58:08 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:13.698 13:58:08 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3776006 /var/tmp/bdevperf.sock 00:19:13.698 13:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3776006 ']' 00:19:13.698 13:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:13.698 13:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:13.698 13:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:13.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:13.698 13:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:13.698 13:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:13.698 [2024-07-15 13:58:08.519873] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
00:19:13.698 [2024-07-15 13:58:08.519949] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3776006 ] 00:19:13.956 EAL: No free 2048 kB hugepages reported on node 1 00:19:13.956 [2024-07-15 13:58:08.577394] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.956 [2024-07-15 13:58:08.682825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:13.956 13:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:13.956 13:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:13.956 13:58:08 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Ca5xJnJ5bC 00:19:14.213 [2024-07-15 13:58:09.052901] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:14.213 [2024-07-15 13:58:09.053038] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:14.471 TLSTESTn1 00:19:14.471 13:58:09 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:14.471 Running I/O for 10 seconds... 00:19:26.665 00:19:26.665 Latency(us) 00:19:26.665 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:26.665 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:26.665 Verification LBA range: start 0x0 length 0x2000 00:19:26.665 TLSTESTn1 : 10.02 3697.86 14.44 0.00 0.00 34558.04 7475.96 39807.05 00:19:26.665 =================================================================================================================== 00:19:26.665 Total : 3697.86 14.44 0.00 0.00 34558.04 7475.96 39807.05 00:19:26.665 0 00:19:26.665 13:58:19 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:26.665 13:58:19 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3776006 00:19:26.665 13:58:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3776006 ']' 00:19:26.665 13:58:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3776006 00:19:26.665 13:58:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:26.665 13:58:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:26.665 13:58:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3776006 00:19:26.665 13:58:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:26.665 13:58:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:26.665 13:58:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3776006' 00:19:26.665 killing process with pid 3776006 00:19:26.665 13:58:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3776006 00:19:26.665 Received shutdown signal, test time was about 10.000000 seconds 00:19:26.665 00:19:26.665 Latency(us) 00:19:26.665 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:19:26.665 =================================================================================================================== 00:19:26.665 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:26.665 [2024-07-15 13:58:19.330199] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:26.665 13:58:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3776006 00:19:26.665 13:58:19 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.iDIJMhKbAr 00:19:26.665 13:58:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:26.665 13:58:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.iDIJMhKbAr 00:19:26.665 13:58:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:26.665 13:58:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:26.665 13:58:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:26.665 13:58:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:26.665 13:58:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.iDIJMhKbAr 00:19:26.665 13:58:19 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:26.665 13:58:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:26.665 13:58:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:26.665 13:58:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.iDIJMhKbAr' 00:19:26.665 13:58:19 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:26.665 13:58:19 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3777323 00:19:26.665 13:58:19 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:26.665 13:58:19 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:26.665 13:58:19 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3777323 /var/tmp/bdevperf.sock 00:19:26.665 13:58:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3777323 ']' 00:19:26.665 13:58:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:26.665 13:58:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:26.665 13:58:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:26.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:26.665 13:58:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:26.665 13:58:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:26.665 [2024-07-15 13:58:19.619531] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
00:19:26.665 [2024-07-15 13:58:19.619606] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3777323 ] 00:19:26.665 EAL: No free 2048 kB hugepages reported on node 1 00:19:26.665 [2024-07-15 13:58:19.677599] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.665 [2024-07-15 13:58:19.791759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:26.665 13:58:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:26.665 13:58:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:26.665 13:58:19 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.iDIJMhKbAr 00:19:26.666 [2024-07-15 13:58:20.190863] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:26.666 [2024-07-15 13:58:20.191038] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:26.666 [2024-07-15 13:58:20.200117] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:26.666 [2024-07-15 13:58:20.200813] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x122b6d0 (107): Transport endpoint is not connected 00:19:26.666 [2024-07-15 13:58:20.201805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x122b6d0 (9): Bad file descriptor 00:19:26.666 [2024-07-15 13:58:20.202804] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:26.666 [2024-07-15 13:58:20.202824] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:26.666 [2024-07-15 13:58:20.202842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:26.666 request: 00:19:26.666 { 00:19:26.666 "name": "TLSTEST", 00:19:26.666 "trtype": "tcp", 00:19:26.666 "traddr": "10.0.0.2", 00:19:26.666 "adrfam": "ipv4", 00:19:26.666 "trsvcid": "4420", 00:19:26.666 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:26.666 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:26.666 "prchk_reftag": false, 00:19:26.666 "prchk_guard": false, 00:19:26.666 "hdgst": false, 00:19:26.666 "ddgst": false, 00:19:26.666 "psk": "/tmp/tmp.iDIJMhKbAr", 00:19:26.666 "method": "bdev_nvme_attach_controller", 00:19:26.666 "req_id": 1 00:19:26.666 } 00:19:26.666 Got JSON-RPC error response 00:19:26.666 response: 00:19:26.666 { 00:19:26.666 "code": -5, 00:19:26.666 "message": "Input/output error" 00:19:26.666 } 00:19:26.666 13:58:20 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3777323 00:19:26.666 13:58:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3777323 ']' 00:19:26.666 13:58:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3777323 00:19:26.666 13:58:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:26.666 13:58:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:26.666 13:58:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3777323 00:19:26.666 13:58:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:26.666 13:58:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:26.666 13:58:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3777323' 00:19:26.666 killing process with pid 3777323 00:19:26.666 13:58:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3777323 00:19:26.666 Received shutdown signal, test time was about 10.000000 seconds 00:19:26.666 00:19:26.666 Latency(us) 00:19:26.666 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:26.666 =================================================================================================================== 00:19:26.666 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:26.666 [2024-07-15 13:58:20.254858] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:26.666 13:58:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3777323 00:19:26.666 13:58:20 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:26.666 13:58:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:26.666 13:58:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:26.666 13:58:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:26.666 13:58:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:26.666 13:58:20 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Ca5xJnJ5bC 00:19:26.666 13:58:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:26.666 13:58:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Ca5xJnJ5bC 00:19:26.666 13:58:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:26.666 13:58:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:26.666 13:58:20 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:26.666 13:58:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:26.666 13:58:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Ca5xJnJ5bC 00:19:26.666 13:58:20 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:26.666 13:58:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:26.666 13:58:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:26.666 13:58:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Ca5xJnJ5bC' 00:19:26.666 13:58:20 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:26.666 13:58:20 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3777378 00:19:26.666 13:58:20 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:26.666 13:58:20 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:26.666 13:58:20 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3777378 /var/tmp/bdevperf.sock 00:19:26.666 13:58:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3777378 ']' 00:19:26.666 13:58:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:26.666 13:58:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:26.666 13:58:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:26.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:26.666 13:58:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:26.666 13:58:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:26.666 [2024-07-15 13:58:20.558847] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
00:19:26.666 [2024-07-15 13:58:20.558940] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3777378 ] 00:19:26.666 EAL: No free 2048 kB hugepages reported on node 1 00:19:26.666 [2024-07-15 13:58:20.622202] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.666 [2024-07-15 13:58:20.732374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:26.666 13:58:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:26.666 13:58:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:26.666 13:58:20 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.Ca5xJnJ5bC 00:19:26.666 [2024-07-15 13:58:21.053105] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:26.666 [2024-07-15 13:58:21.053215] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:26.666 [2024-07-15 13:58:21.062239] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:26.666 [2024-07-15 13:58:21.062268] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:26.666 [2024-07-15 13:58:21.062330] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:26.666 [2024-07-15 13:58:21.063009] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd086d0 (107): Transport endpoint is not connected 00:19:26.666 [2024-07-15 13:58:21.064002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd086d0 (9): Bad file descriptor 00:19:26.666 [2024-07-15 13:58:21.065001] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:26.666 [2024-07-15 13:58:21.065020] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:26.666 [2024-07-15 13:58:21.065051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:26.666 request: 00:19:26.666 { 00:19:26.666 "name": "TLSTEST", 00:19:26.666 "trtype": "tcp", 00:19:26.666 "traddr": "10.0.0.2", 00:19:26.666 "adrfam": "ipv4", 00:19:26.666 "trsvcid": "4420", 00:19:26.666 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:26.666 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:26.666 "prchk_reftag": false, 00:19:26.666 "prchk_guard": false, 00:19:26.666 "hdgst": false, 00:19:26.666 "ddgst": false, 00:19:26.666 "psk": "/tmp/tmp.Ca5xJnJ5bC", 00:19:26.666 "method": "bdev_nvme_attach_controller", 00:19:26.666 "req_id": 1 00:19:26.666 } 00:19:26.666 Got JSON-RPC error response 00:19:26.666 response: 00:19:26.666 { 00:19:26.666 "code": -5, 00:19:26.666 "message": "Input/output error" 00:19:26.666 } 00:19:26.666 13:58:21 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3777378 00:19:26.666 13:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3777378 ']' 00:19:26.666 13:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3777378 00:19:26.666 13:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:26.666 13:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:26.666 13:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3777378 00:19:26.666 13:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:26.666 13:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:26.666 13:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3777378' 00:19:26.666 killing process with pid 3777378 00:19:26.666 13:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3777378 00:19:26.666 Received shutdown signal, test time was about 10.000000 seconds 00:19:26.666 00:19:26.666 Latency(us) 00:19:26.666 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:26.666 =================================================================================================================== 00:19:26.666 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:26.666 [2024-07-15 13:58:21.108336] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:26.666 13:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3777378 00:19:26.666 13:58:21 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:26.666 13:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:26.666 13:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:26.666 13:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:26.666 13:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:26.666 13:58:21 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ca5xJnJ5bC 00:19:26.666 13:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:26.666 13:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ca5xJnJ5bC 00:19:26.666 13:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:26.666 13:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:26.666 13:58:21 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:26.666 13:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:26.666 13:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ca5xJnJ5bC 00:19:26.666 13:58:21 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:26.666 13:58:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:26.666 13:58:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:26.666 13:58:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Ca5xJnJ5bC' 00:19:26.666 13:58:21 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:26.666 13:58:21 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3777482 00:19:26.666 13:58:21 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:26.666 13:58:21 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:26.666 13:58:21 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3777482 /var/tmp/bdevperf.sock 00:19:26.666 13:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3777482 ']' 00:19:26.666 13:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:26.666 13:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:26.666 13:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:26.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:26.666 13:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:26.666 13:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:26.666 [2024-07-15 13:58:21.397069] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
00:19:26.666 [2024-07-15 13:58:21.397167] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3777482 ] 00:19:26.666 EAL: No free 2048 kB hugepages reported on node 1 00:19:26.666 [2024-07-15 13:58:21.454270] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.923 [2024-07-15 13:58:21.557993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:26.923 13:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:26.923 13:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:26.924 13:58:21 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Ca5xJnJ5bC 00:19:27.181 [2024-07-15 13:58:21.903638] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:27.181 [2024-07-15 13:58:21.903791] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:27.181 [2024-07-15 13:58:21.914416] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:27.181 [2024-07-15 13:58:21.914445] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:27.181 [2024-07-15 13:58:21.914500] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:27.181 [2024-07-15 13:58:21.914621] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xebe6d0 (107): Transport endpoint is not connected 00:19:27.181 [2024-07-15 13:58:21.915613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xebe6d0 (9): Bad file descriptor 00:19:27.181 [2024-07-15 13:58:21.916612] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:27.181 [2024-07-15 13:58:21.916631] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:27.181 [2024-07-15 13:58:21.916661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:19:27.181 request: 00:19:27.181 { 00:19:27.181 "name": "TLSTEST", 00:19:27.181 "trtype": "tcp", 00:19:27.181 "traddr": "10.0.0.2", 00:19:27.181 "adrfam": "ipv4", 00:19:27.181 "trsvcid": "4420", 00:19:27.181 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:27.181 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:27.181 "prchk_reftag": false, 00:19:27.181 "prchk_guard": false, 00:19:27.181 "hdgst": false, 00:19:27.181 "ddgst": false, 00:19:27.181 "psk": "/tmp/tmp.Ca5xJnJ5bC", 00:19:27.181 "method": "bdev_nvme_attach_controller", 00:19:27.181 "req_id": 1 00:19:27.181 } 00:19:27.181 Got JSON-RPC error response 00:19:27.181 response: 00:19:27.181 { 00:19:27.181 "code": -5, 00:19:27.181 "message": "Input/output error" 00:19:27.181 } 00:19:27.181 13:58:21 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3777482 00:19:27.181 13:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3777482 ']' 00:19:27.181 13:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3777482 00:19:27.181 13:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:27.181 13:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:27.181 13:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3777482 00:19:27.181 13:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:27.181 13:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:27.181 13:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3777482' 00:19:27.181 killing process with pid 3777482 00:19:27.181 13:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3777482 00:19:27.181 Received shutdown signal, test time was about 10.000000 seconds 00:19:27.181 00:19:27.181 Latency(us) 00:19:27.181 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:27.181 =================================================================================================================== 00:19:27.181 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:27.181 [2024-07-15 13:58:21.967196] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:27.181 13:58:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3777482 00:19:27.439 13:58:22 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:27.439 13:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:27.439 13:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:27.439 13:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:27.439 13:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:27.439 13:58:22 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:27.439 13:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:27.439 13:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:27.439 13:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:27.439 13:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:27.439 13:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type 
-t run_bdevperf 00:19:27.439 13:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:27.439 13:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:27.439 13:58:22 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:27.439 13:58:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:27.439 13:58:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:27.439 13:58:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:27.439 13:58:22 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:27.439 13:58:22 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3777621 00:19:27.439 13:58:22 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:27.439 13:58:22 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:27.440 13:58:22 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3777621 /var/tmp/bdevperf.sock 00:19:27.440 13:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3777621 ']' 00:19:27.440 13:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:27.440 13:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:27.440 13:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:27.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:27.440 13:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:27.440 13:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:27.440 [2024-07-15 13:58:22.272474] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
00:19:27.440 [2024-07-15 13:58:22.272554] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3777621 ] 00:19:27.697 EAL: No free 2048 kB hugepages reported on node 1 00:19:27.697 [2024-07-15 13:58:22.332106] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.697 [2024-07-15 13:58:22.440175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:27.954 13:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:27.954 13:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:27.954 13:58:22 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:28.211 [2024-07-15 13:58:22.826836] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:28.211 [2024-07-15 13:58:22.829016] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x956e10 (9): Bad file descriptor 00:19:28.211 [2024-07-15 13:58:22.830011] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:28.211 [2024-07-15 13:58:22.830032] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:28.211 [2024-07-15 13:58:22.830064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:28.211 request: 00:19:28.211 { 00:19:28.211 "name": "TLSTEST", 00:19:28.211 "trtype": "tcp", 00:19:28.211 "traddr": "10.0.0.2", 00:19:28.211 "adrfam": "ipv4", 00:19:28.211 "trsvcid": "4420", 00:19:28.211 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:28.211 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:28.211 "prchk_reftag": false, 00:19:28.211 "prchk_guard": false, 00:19:28.211 "hdgst": false, 00:19:28.211 "ddgst": false, 00:19:28.211 "method": "bdev_nvme_attach_controller", 00:19:28.211 "req_id": 1 00:19:28.211 } 00:19:28.211 Got JSON-RPC error response 00:19:28.211 response: 00:19:28.211 { 00:19:28.211 "code": -5, 00:19:28.211 "message": "Input/output error" 00:19:28.211 } 00:19:28.211 13:58:22 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3777621 00:19:28.211 13:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3777621 ']' 00:19:28.211 13:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3777621 00:19:28.211 13:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:28.211 13:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:28.211 13:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3777621 00:19:28.211 13:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:28.211 13:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:28.211 13:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3777621' 00:19:28.211 killing process with pid 3777621 00:19:28.211 13:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3777621 00:19:28.211 Received shutdown signal, test time was about 10.000000 seconds 00:19:28.211 00:19:28.211 Latency(us) 00:19:28.211 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:28.211 =================================================================================================================== 00:19:28.211 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:28.211 13:58:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3777621 00:19:28.469 13:58:23 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:28.469 13:58:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:28.469 13:58:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:28.469 13:58:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:28.469 13:58:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:28.469 13:58:23 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 3774120 00:19:28.469 13:58:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3774120 ']' 00:19:28.469 13:58:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3774120 00:19:28.469 13:58:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:28.469 13:58:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:28.469 13:58:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3774120 00:19:28.469 13:58:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:28.469 13:58:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:28.469 13:58:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3774120' 00:19:28.469 
killing process with pid 3774120 00:19:28.469 13:58:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3774120 00:19:28.469 [2024-07-15 13:58:23.157215] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:28.469 13:58:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3774120 00:19:28.729 13:58:23 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:28.729 13:58:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:28.729 13:58:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:28.729 13:58:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:28.729 13:58:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:28.729 13:58:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:19:28.729 13:58:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:28.729 13:58:23 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:28.729 13:58:23 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:19:28.729 13:58:23 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.99HwghmHBL 00:19:28.729 13:58:23 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:28.729 13:58:23 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.99HwghmHBL 00:19:28.729 13:58:23 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:19:28.729 13:58:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:28.729 13:58:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:28.729 13:58:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.729 13:58:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3777770 00:19:28.729 13:58:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:28.729 13:58:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3777770 00:19:28.729 13:58:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3777770 ']' 00:19:28.729 13:58:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:28.729 13:58:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:28.729 13:58:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:28.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:28.729 13:58:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:28.729 13:58:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.729 [2024-07-15 13:58:23.538175] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
00:19:28.729 [2024-07-15 13:58:23.538268] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:28.988 EAL: No free 2048 kB hugepages reported on node 1 00:19:28.988 [2024-07-15 13:58:23.601069] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.988 [2024-07-15 13:58:23.698296] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:28.988 [2024-07-15 13:58:23.698353] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:28.988 [2024-07-15 13:58:23.698387] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:28.988 [2024-07-15 13:58:23.698398] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:28.988 [2024-07-15 13:58:23.698407] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:28.988 [2024-07-15 13:58:23.698435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:28.988 13:58:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:28.988 13:58:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:28.988 13:58:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:28.988 13:58:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:28.988 13:58:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:29.246 13:58:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:29.246 13:58:23 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.99HwghmHBL 00:19:29.246 13:58:23 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.99HwghmHBL 00:19:29.246 13:58:23 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:29.504 [2024-07-15 13:58:24.096173] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:29.504 13:58:24 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:29.761 13:58:24 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:29.761 [2024-07-15 13:58:24.581451] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:29.761 [2024-07-15 13:58:24.581703] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:29.761 13:58:24 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:30.327 malloc0 00:19:30.327 13:58:24 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:30.327 13:58:25 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.99HwghmHBL 00:19:30.584 [2024-07-15 13:58:25.390161] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:30.584 13:58:25 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.99HwghmHBL 00:19:30.584 13:58:25 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:30.584 13:58:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:30.584 13:58:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:30.584 13:58:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.99HwghmHBL' 00:19:30.584 13:58:25 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:30.584 13:58:25 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3778052 00:19:30.584 13:58:25 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:30.584 13:58:25 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:30.584 13:58:25 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3778052 /var/tmp/bdevperf.sock 00:19:30.584 13:58:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3778052 ']' 00:19:30.584 13:58:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:30.584 13:58:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:30.584 13:58:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:30.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:30.584 13:58:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:30.584 13:58:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:30.843 [2024-07-15 13:58:25.453884] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
00:19:30.843 [2024-07-15 13:58:25.453947] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3778052 ] 00:19:30.843 EAL: No free 2048 kB hugepages reported on node 1 00:19:30.843 [2024-07-15 13:58:25.514613] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.843 [2024-07-15 13:58:25.624377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:31.100 13:58:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:31.100 13:58:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:31.100 13:58:25 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.99HwghmHBL 00:19:31.358 [2024-07-15 13:58:25.947904] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:31.358 [2024-07-15 13:58:25.948035] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:31.358 TLSTESTn1 00:19:31.358 13:58:26 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:31.358 Running I/O for 10 seconds... 00:19:41.390 00:19:41.390 Latency(us) 00:19:41.390 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:41.390 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:41.390 Verification LBA range: start 0x0 length 0x2000 00:19:41.390 TLSTESTn1 : 10.02 3081.38 12.04 0.00 0.00 41475.64 6602.15 45826.65 00:19:41.390 =================================================================================================================== 00:19:41.390 Total : 3081.38 12.04 0.00 0.00 41475.64 6602.15 45826.65 00:19:41.390 0 00:19:41.390 13:58:36 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:41.390 13:58:36 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3778052 00:19:41.390 13:58:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3778052 ']' 00:19:41.390 13:58:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3778052 00:19:41.390 13:58:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:41.390 13:58:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:41.390 13:58:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3778052 00:19:41.390 13:58:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:41.390 13:58:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:41.390 13:58:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3778052' 00:19:41.390 killing process with pid 3778052 00:19:41.390 13:58:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3778052 00:19:41.390 Received shutdown signal, test time was about 10.000000 seconds 00:19:41.390 00:19:41.390 Latency(us) 00:19:41.390 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:19:41.390 =================================================================================================================== 00:19:41.390 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:41.390 [2024-07-15 13:58:36.217372] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:41.390 13:58:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3778052 00:19:41.648 13:58:36 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.99HwghmHBL 00:19:41.648 13:58:36 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.99HwghmHBL 00:19:41.648 13:58:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:41.648 13:58:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.99HwghmHBL 00:19:41.648 13:58:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:41.648 13:58:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:41.648 13:58:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:41.648 13:58:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:41.648 13:58:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.99HwghmHBL 00:19:41.648 13:58:36 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:41.648 13:58:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:41.648 13:58:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:41.648 13:58:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.99HwghmHBL' 00:19:41.648 13:58:36 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:41.648 13:58:36 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3779370 00:19:41.648 13:58:36 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:41.648 13:58:36 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:41.648 13:58:36 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3779370 /var/tmp/bdevperf.sock 00:19:41.648 13:58:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3779370 ']' 00:19:41.648 13:58:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:41.648 13:58:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:41.648 13:58:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:41.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:41.648 13:58:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:41.648 13:58:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:41.906 [2024-07-15 13:58:36.527798] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
00:19:41.906 [2024-07-15 13:58:36.527877] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3779370 ] 00:19:41.906 EAL: No free 2048 kB hugepages reported on node 1 00:19:41.906 [2024-07-15 13:58:36.586210] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.906 [2024-07-15 13:58:36.689549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:42.163 13:58:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:42.163 13:58:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:42.163 13:58:36 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.99HwghmHBL 00:19:42.421 [2024-07-15 13:58:37.018465] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:42.421 [2024-07-15 13:58:37.018556] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:42.421 [2024-07-15 13:58:37.018570] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.99HwghmHBL 00:19:42.421 request: 00:19:42.421 { 00:19:42.421 "name": "TLSTEST", 00:19:42.421 "trtype": "tcp", 00:19:42.421 "traddr": "10.0.0.2", 00:19:42.421 "adrfam": "ipv4", 00:19:42.421 "trsvcid": "4420", 00:19:42.421 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:42.421 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:42.421 "prchk_reftag": false, 00:19:42.421 "prchk_guard": false, 00:19:42.421 "hdgst": false, 00:19:42.421 "ddgst": false, 00:19:42.421 "psk": "/tmp/tmp.99HwghmHBL", 00:19:42.421 "method": "bdev_nvme_attach_controller", 00:19:42.421 "req_id": 1 00:19:42.421 } 00:19:42.421 Got JSON-RPC error response 00:19:42.421 response: 00:19:42.421 { 00:19:42.421 "code": -1, 00:19:42.421 "message": "Operation not permitted" 00:19:42.421 } 00:19:42.421 13:58:37 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3779370 00:19:42.421 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3779370 ']' 00:19:42.421 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3779370 00:19:42.421 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:42.421 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:42.421 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3779370 00:19:42.421 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:42.421 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:42.421 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3779370' 00:19:42.421 killing process with pid 3779370 00:19:42.421 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3779370 00:19:42.421 Received shutdown signal, test time was about 10.000000 seconds 00:19:42.421 00:19:42.421 Latency(us) 00:19:42.421 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:42.421 
=================================================================================================================== 00:19:42.421 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:42.421 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3779370 00:19:42.702 13:58:37 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:42.702 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:42.702 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:42.702 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:42.702 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:42.702 13:58:37 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 3777770 00:19:42.702 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3777770 ']' 00:19:42.703 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3777770 00:19:42.703 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:42.703 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:42.703 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3777770 00:19:42.703 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:42.703 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:42.703 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3777770' 00:19:42.703 killing process with pid 3777770 00:19:42.703 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3777770 00:19:42.703 [2024-07-15 13:58:37.358847] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:42.703 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3777770 00:19:42.959 13:58:37 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:19:42.959 13:58:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:42.959 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:42.959 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:42.959 13:58:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3779514 00:19:42.959 13:58:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:42.959 13:58:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3779514 00:19:42.959 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3779514 ']' 00:19:42.959 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:42.959 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:42.959 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:42.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:42.959 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:42.959 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:42.959 [2024-07-15 13:58:37.689943] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:19:42.959 [2024-07-15 13:58:37.690021] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:42.959 EAL: No free 2048 kB hugepages reported on node 1 00:19:42.959 [2024-07-15 13:58:37.752219] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.217 [2024-07-15 13:58:37.859678] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:43.217 [2024-07-15 13:58:37.859732] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:43.217 [2024-07-15 13:58:37.859769] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:43.217 [2024-07-15 13:58:37.859782] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:43.217 [2024-07-15 13:58:37.859792] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:43.217 [2024-07-15 13:58:37.859834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:43.217 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:43.217 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:43.217 13:58:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:43.217 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:43.217 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:43.217 13:58:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:43.217 13:58:37 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.99HwghmHBL 00:19:43.217 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:43.217 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.99HwghmHBL 00:19:43.217 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:19:43.217 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:43.217 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:19:43.217 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:43.217 13:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.99HwghmHBL 00:19:43.217 13:58:37 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.99HwghmHBL 00:19:43.217 13:58:37 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:43.474 [2024-07-15 13:58:38.223581] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:43.474 13:58:38 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:43.730 
13:58:38 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:43.988 [2024-07-15 13:58:38.805169] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:43.988 [2024-07-15 13:58:38.805395] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:43.989 13:58:38 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:44.554 malloc0 00:19:44.554 13:58:39 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:44.554 13:58:39 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.99HwghmHBL 00:19:44.811 [2024-07-15 13:58:39.558105] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:44.811 [2024-07-15 13:58:39.558139] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:19:44.811 [2024-07-15 13:58:39.558185] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:44.811 request: 00:19:44.811 { 00:19:44.811 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:44.811 "host": "nqn.2016-06.io.spdk:host1", 00:19:44.811 "psk": "/tmp/tmp.99HwghmHBL", 00:19:44.811 "method": "nvmf_subsystem_add_host", 00:19:44.811 "req_id": 1 00:19:44.811 } 00:19:44.811 Got JSON-RPC error response 00:19:44.811 response: 00:19:44.811 { 00:19:44.811 "code": -32603, 00:19:44.811 "message": "Internal error" 00:19:44.811 } 00:19:44.811 13:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:44.811 13:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:44.811 13:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:44.811 13:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:44.811 13:58:39 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 3779514 00:19:44.811 13:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3779514 ']' 00:19:44.811 13:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3779514 00:19:44.811 13:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:44.811 13:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:44.811 13:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3779514 00:19:44.811 13:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:44.811 13:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:44.811 13:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3779514' 00:19:44.811 killing process with pid 3779514 00:19:44.811 13:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3779514 00:19:44.811 13:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3779514 00:19:45.069 13:58:39 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.99HwghmHBL 00:19:45.069 13:58:39 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:19:45.069 
13:58:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:45.069 13:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:45.069 13:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:45.069 13:58:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3779807 00:19:45.069 13:58:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:45.069 13:58:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3779807 00:19:45.069 13:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3779807 ']' 00:19:45.069 13:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:45.069 13:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:45.069 13:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:45.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:45.069 13:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:45.069 13:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:45.327 [2024-07-15 13:58:39.933489] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:19:45.327 [2024-07-15 13:58:39.933583] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:45.327 EAL: No free 2048 kB hugepages reported on node 1 00:19:45.327 [2024-07-15 13:58:39.997174] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.327 [2024-07-15 13:58:40.112497] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:45.327 [2024-07-15 13:58:40.112569] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:45.327 [2024-07-15 13:58:40.112598] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:45.327 [2024-07-15 13:58:40.112610] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:45.327 [2024-07-15 13:58:40.112620] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:45.327 [2024-07-15 13:58:40.112652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:45.585 13:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:45.585 13:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:45.585 13:58:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:45.585 13:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:45.585 13:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:45.585 13:58:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:45.585 13:58:40 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.99HwghmHBL 00:19:45.585 13:58:40 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.99HwghmHBL 00:19:45.585 13:58:40 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:45.842 [2024-07-15 13:58:40.472174] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:45.842 13:58:40 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:46.100 13:58:40 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:46.357 [2024-07-15 13:58:41.013624] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:46.357 [2024-07-15 13:58:41.013884] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:46.357 13:58:41 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:46.615 malloc0 00:19:46.615 13:58:41 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:46.872 13:58:41 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.99HwghmHBL 00:19:47.130 [2024-07-15 13:58:41.801784] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:47.130 13:58:41 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=3779977 00:19:47.130 13:58:41 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:47.130 13:58:41 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:47.130 13:58:41 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 3779977 /var/tmp/bdevperf.sock 00:19:47.130 13:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3779977 ']' 00:19:47.130 13:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:47.130 13:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:47.130 13:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:47.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:47.130 13:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:47.130 13:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:47.130 [2024-07-15 13:58:41.855136] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:19:47.130 [2024-07-15 13:58:41.855220] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3779977 ] 00:19:47.130 EAL: No free 2048 kB hugepages reported on node 1 00:19:47.130 [2024-07-15 13:58:41.913520] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:47.387 [2024-07-15 13:58:42.021409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:47.387 13:58:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:47.387 13:58:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:47.387 13:58:42 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.99HwghmHBL 00:19:47.645 [2024-07-15 13:58:42.368748] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:47.645 [2024-07-15 13:58:42.368864] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:47.645 TLSTESTn1 00:19:47.645 13:58:42 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:48.209 13:58:42 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:19:48.209 "subsystems": [ 00:19:48.209 { 00:19:48.209 "subsystem": "keyring", 00:19:48.209 "config": [] 00:19:48.209 }, 00:19:48.209 { 00:19:48.209 "subsystem": "iobuf", 00:19:48.209 "config": [ 00:19:48.209 { 00:19:48.209 "method": "iobuf_set_options", 00:19:48.209 "params": { 00:19:48.209 "small_pool_count": 8192, 00:19:48.209 "large_pool_count": 1024, 00:19:48.209 "small_bufsize": 8192, 00:19:48.209 "large_bufsize": 135168 00:19:48.209 } 00:19:48.209 } 00:19:48.209 ] 00:19:48.209 }, 00:19:48.209 { 00:19:48.209 "subsystem": "sock", 00:19:48.209 "config": [ 00:19:48.209 { 00:19:48.209 "method": "sock_set_default_impl", 00:19:48.209 "params": { 00:19:48.209 "impl_name": "posix" 00:19:48.209 } 00:19:48.209 }, 00:19:48.209 { 00:19:48.209 "method": "sock_impl_set_options", 00:19:48.209 "params": { 00:19:48.209 "impl_name": "ssl", 00:19:48.209 "recv_buf_size": 4096, 00:19:48.209 "send_buf_size": 4096, 00:19:48.209 "enable_recv_pipe": true, 00:19:48.209 "enable_quickack": false, 00:19:48.209 "enable_placement_id": 0, 00:19:48.209 "enable_zerocopy_send_server": true, 00:19:48.209 "enable_zerocopy_send_client": false, 00:19:48.209 "zerocopy_threshold": 0, 00:19:48.209 "tls_version": 0, 00:19:48.209 "enable_ktls": false 00:19:48.209 } 00:19:48.209 }, 00:19:48.209 { 00:19:48.209 "method": "sock_impl_set_options", 00:19:48.209 "params": { 00:19:48.209 "impl_name": "posix", 00:19:48.209 "recv_buf_size": 2097152, 00:19:48.209 
"send_buf_size": 2097152, 00:19:48.209 "enable_recv_pipe": true, 00:19:48.209 "enable_quickack": false, 00:19:48.209 "enable_placement_id": 0, 00:19:48.209 "enable_zerocopy_send_server": true, 00:19:48.209 "enable_zerocopy_send_client": false, 00:19:48.209 "zerocopy_threshold": 0, 00:19:48.209 "tls_version": 0, 00:19:48.209 "enable_ktls": false 00:19:48.209 } 00:19:48.209 } 00:19:48.209 ] 00:19:48.209 }, 00:19:48.209 { 00:19:48.209 "subsystem": "vmd", 00:19:48.209 "config": [] 00:19:48.209 }, 00:19:48.209 { 00:19:48.209 "subsystem": "accel", 00:19:48.209 "config": [ 00:19:48.209 { 00:19:48.209 "method": "accel_set_options", 00:19:48.209 "params": { 00:19:48.209 "small_cache_size": 128, 00:19:48.209 "large_cache_size": 16, 00:19:48.209 "task_count": 2048, 00:19:48.209 "sequence_count": 2048, 00:19:48.209 "buf_count": 2048 00:19:48.209 } 00:19:48.209 } 00:19:48.209 ] 00:19:48.209 }, 00:19:48.209 { 00:19:48.209 "subsystem": "bdev", 00:19:48.209 "config": [ 00:19:48.209 { 00:19:48.209 "method": "bdev_set_options", 00:19:48.209 "params": { 00:19:48.209 "bdev_io_pool_size": 65535, 00:19:48.209 "bdev_io_cache_size": 256, 00:19:48.209 "bdev_auto_examine": true, 00:19:48.209 "iobuf_small_cache_size": 128, 00:19:48.209 "iobuf_large_cache_size": 16 00:19:48.209 } 00:19:48.209 }, 00:19:48.209 { 00:19:48.209 "method": "bdev_raid_set_options", 00:19:48.209 "params": { 00:19:48.209 "process_window_size_kb": 1024 00:19:48.209 } 00:19:48.209 }, 00:19:48.209 { 00:19:48.209 "method": "bdev_iscsi_set_options", 00:19:48.209 "params": { 00:19:48.209 "timeout_sec": 30 00:19:48.209 } 00:19:48.209 }, 00:19:48.209 { 00:19:48.209 "method": "bdev_nvme_set_options", 00:19:48.209 "params": { 00:19:48.209 "action_on_timeout": "none", 00:19:48.209 "timeout_us": 0, 00:19:48.209 "timeout_admin_us": 0, 00:19:48.209 "keep_alive_timeout_ms": 10000, 00:19:48.209 "arbitration_burst": 0, 00:19:48.209 "low_priority_weight": 0, 00:19:48.209 "medium_priority_weight": 0, 00:19:48.209 "high_priority_weight": 0, 00:19:48.209 "nvme_adminq_poll_period_us": 10000, 00:19:48.209 "nvme_ioq_poll_period_us": 0, 00:19:48.209 "io_queue_requests": 0, 00:19:48.209 "delay_cmd_submit": true, 00:19:48.209 "transport_retry_count": 4, 00:19:48.209 "bdev_retry_count": 3, 00:19:48.209 "transport_ack_timeout": 0, 00:19:48.209 "ctrlr_loss_timeout_sec": 0, 00:19:48.209 "reconnect_delay_sec": 0, 00:19:48.209 "fast_io_fail_timeout_sec": 0, 00:19:48.209 "disable_auto_failback": false, 00:19:48.209 "generate_uuids": false, 00:19:48.210 "transport_tos": 0, 00:19:48.210 "nvme_error_stat": false, 00:19:48.210 "rdma_srq_size": 0, 00:19:48.210 "io_path_stat": false, 00:19:48.210 "allow_accel_sequence": false, 00:19:48.210 "rdma_max_cq_size": 0, 00:19:48.210 "rdma_cm_event_timeout_ms": 0, 00:19:48.210 "dhchap_digests": [ 00:19:48.210 "sha256", 00:19:48.210 "sha384", 00:19:48.210 "sha512" 00:19:48.210 ], 00:19:48.210 "dhchap_dhgroups": [ 00:19:48.210 "null", 00:19:48.210 "ffdhe2048", 00:19:48.210 "ffdhe3072", 00:19:48.210 "ffdhe4096", 00:19:48.210 "ffdhe6144", 00:19:48.210 "ffdhe8192" 00:19:48.210 ] 00:19:48.210 } 00:19:48.210 }, 00:19:48.210 { 00:19:48.210 "method": "bdev_nvme_set_hotplug", 00:19:48.210 "params": { 00:19:48.210 "period_us": 100000, 00:19:48.210 "enable": false 00:19:48.210 } 00:19:48.210 }, 00:19:48.210 { 00:19:48.210 "method": "bdev_malloc_create", 00:19:48.210 "params": { 00:19:48.210 "name": "malloc0", 00:19:48.210 "num_blocks": 8192, 00:19:48.210 "block_size": 4096, 00:19:48.210 "physical_block_size": 4096, 00:19:48.210 "uuid": 
"3230d7c3-7788-47c6-b91d-50e6451becc2", 00:19:48.210 "optimal_io_boundary": 0 00:19:48.210 } 00:19:48.210 }, 00:19:48.210 { 00:19:48.210 "method": "bdev_wait_for_examine" 00:19:48.210 } 00:19:48.210 ] 00:19:48.210 }, 00:19:48.210 { 00:19:48.210 "subsystem": "nbd", 00:19:48.210 "config": [] 00:19:48.210 }, 00:19:48.210 { 00:19:48.210 "subsystem": "scheduler", 00:19:48.210 "config": [ 00:19:48.210 { 00:19:48.210 "method": "framework_set_scheduler", 00:19:48.210 "params": { 00:19:48.210 "name": "static" 00:19:48.210 } 00:19:48.210 } 00:19:48.210 ] 00:19:48.210 }, 00:19:48.210 { 00:19:48.210 "subsystem": "nvmf", 00:19:48.210 "config": [ 00:19:48.210 { 00:19:48.210 "method": "nvmf_set_config", 00:19:48.210 "params": { 00:19:48.210 "discovery_filter": "match_any", 00:19:48.210 "admin_cmd_passthru": { 00:19:48.210 "identify_ctrlr": false 00:19:48.210 } 00:19:48.210 } 00:19:48.210 }, 00:19:48.210 { 00:19:48.210 "method": "nvmf_set_max_subsystems", 00:19:48.210 "params": { 00:19:48.210 "max_subsystems": 1024 00:19:48.210 } 00:19:48.210 }, 00:19:48.210 { 00:19:48.210 "method": "nvmf_set_crdt", 00:19:48.210 "params": { 00:19:48.210 "crdt1": 0, 00:19:48.210 "crdt2": 0, 00:19:48.210 "crdt3": 0 00:19:48.210 } 00:19:48.210 }, 00:19:48.210 { 00:19:48.210 "method": "nvmf_create_transport", 00:19:48.210 "params": { 00:19:48.210 "trtype": "TCP", 00:19:48.210 "max_queue_depth": 128, 00:19:48.210 "max_io_qpairs_per_ctrlr": 127, 00:19:48.210 "in_capsule_data_size": 4096, 00:19:48.210 "max_io_size": 131072, 00:19:48.210 "io_unit_size": 131072, 00:19:48.210 "max_aq_depth": 128, 00:19:48.210 "num_shared_buffers": 511, 00:19:48.210 "buf_cache_size": 4294967295, 00:19:48.210 "dif_insert_or_strip": false, 00:19:48.210 "zcopy": false, 00:19:48.210 "c2h_success": false, 00:19:48.210 "sock_priority": 0, 00:19:48.210 "abort_timeout_sec": 1, 00:19:48.210 "ack_timeout": 0, 00:19:48.210 "data_wr_pool_size": 0 00:19:48.210 } 00:19:48.210 }, 00:19:48.210 { 00:19:48.210 "method": "nvmf_create_subsystem", 00:19:48.210 "params": { 00:19:48.210 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.210 "allow_any_host": false, 00:19:48.210 "serial_number": "SPDK00000000000001", 00:19:48.210 "model_number": "SPDK bdev Controller", 00:19:48.210 "max_namespaces": 10, 00:19:48.210 "min_cntlid": 1, 00:19:48.210 "max_cntlid": 65519, 00:19:48.210 "ana_reporting": false 00:19:48.210 } 00:19:48.210 }, 00:19:48.210 { 00:19:48.210 "method": "nvmf_subsystem_add_host", 00:19:48.210 "params": { 00:19:48.210 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.210 "host": "nqn.2016-06.io.spdk:host1", 00:19:48.210 "psk": "/tmp/tmp.99HwghmHBL" 00:19:48.210 } 00:19:48.210 }, 00:19:48.210 { 00:19:48.210 "method": "nvmf_subsystem_add_ns", 00:19:48.210 "params": { 00:19:48.210 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.210 "namespace": { 00:19:48.210 "nsid": 1, 00:19:48.210 "bdev_name": "malloc0", 00:19:48.210 "nguid": "3230D7C3778847C6B91D50E6451BECC2", 00:19:48.210 "uuid": "3230d7c3-7788-47c6-b91d-50e6451becc2", 00:19:48.210 "no_auto_visible": false 00:19:48.210 } 00:19:48.210 } 00:19:48.210 }, 00:19:48.210 { 00:19:48.210 "method": "nvmf_subsystem_add_listener", 00:19:48.210 "params": { 00:19:48.210 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.210 "listen_address": { 00:19:48.210 "trtype": "TCP", 00:19:48.210 "adrfam": "IPv4", 00:19:48.210 "traddr": "10.0.0.2", 00:19:48.210 "trsvcid": "4420" 00:19:48.210 }, 00:19:48.210 "secure_channel": true 00:19:48.210 } 00:19:48.210 } 00:19:48.210 ] 00:19:48.210 } 00:19:48.210 ] 00:19:48.210 }' 00:19:48.210 13:58:42 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:48.469 13:58:43 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:19:48.469 "subsystems": [ 00:19:48.469 { 00:19:48.469 "subsystem": "keyring", 00:19:48.469 "config": [] 00:19:48.469 }, 00:19:48.469 { 00:19:48.469 "subsystem": "iobuf", 00:19:48.469 "config": [ 00:19:48.469 { 00:19:48.469 "method": "iobuf_set_options", 00:19:48.469 "params": { 00:19:48.469 "small_pool_count": 8192, 00:19:48.469 "large_pool_count": 1024, 00:19:48.469 "small_bufsize": 8192, 00:19:48.469 "large_bufsize": 135168 00:19:48.469 } 00:19:48.469 } 00:19:48.469 ] 00:19:48.469 }, 00:19:48.469 { 00:19:48.469 "subsystem": "sock", 00:19:48.469 "config": [ 00:19:48.469 { 00:19:48.469 "method": "sock_set_default_impl", 00:19:48.469 "params": { 00:19:48.469 "impl_name": "posix" 00:19:48.469 } 00:19:48.469 }, 00:19:48.469 { 00:19:48.469 "method": "sock_impl_set_options", 00:19:48.469 "params": { 00:19:48.469 "impl_name": "ssl", 00:19:48.469 "recv_buf_size": 4096, 00:19:48.469 "send_buf_size": 4096, 00:19:48.469 "enable_recv_pipe": true, 00:19:48.469 "enable_quickack": false, 00:19:48.469 "enable_placement_id": 0, 00:19:48.469 "enable_zerocopy_send_server": true, 00:19:48.470 "enable_zerocopy_send_client": false, 00:19:48.470 "zerocopy_threshold": 0, 00:19:48.470 "tls_version": 0, 00:19:48.470 "enable_ktls": false 00:19:48.470 } 00:19:48.470 }, 00:19:48.470 { 00:19:48.470 "method": "sock_impl_set_options", 00:19:48.470 "params": { 00:19:48.470 "impl_name": "posix", 00:19:48.470 "recv_buf_size": 2097152, 00:19:48.470 "send_buf_size": 2097152, 00:19:48.470 "enable_recv_pipe": true, 00:19:48.470 "enable_quickack": false, 00:19:48.470 "enable_placement_id": 0, 00:19:48.470 "enable_zerocopy_send_server": true, 00:19:48.470 "enable_zerocopy_send_client": false, 00:19:48.470 "zerocopy_threshold": 0, 00:19:48.470 "tls_version": 0, 00:19:48.470 "enable_ktls": false 00:19:48.470 } 00:19:48.470 } 00:19:48.470 ] 00:19:48.470 }, 00:19:48.470 { 00:19:48.470 "subsystem": "vmd", 00:19:48.470 "config": [] 00:19:48.470 }, 00:19:48.470 { 00:19:48.470 "subsystem": "accel", 00:19:48.470 "config": [ 00:19:48.470 { 00:19:48.470 "method": "accel_set_options", 00:19:48.470 "params": { 00:19:48.470 "small_cache_size": 128, 00:19:48.470 "large_cache_size": 16, 00:19:48.470 "task_count": 2048, 00:19:48.470 "sequence_count": 2048, 00:19:48.470 "buf_count": 2048 00:19:48.470 } 00:19:48.470 } 00:19:48.470 ] 00:19:48.470 }, 00:19:48.470 { 00:19:48.470 "subsystem": "bdev", 00:19:48.470 "config": [ 00:19:48.470 { 00:19:48.470 "method": "bdev_set_options", 00:19:48.470 "params": { 00:19:48.470 "bdev_io_pool_size": 65535, 00:19:48.470 "bdev_io_cache_size": 256, 00:19:48.470 "bdev_auto_examine": true, 00:19:48.470 "iobuf_small_cache_size": 128, 00:19:48.470 "iobuf_large_cache_size": 16 00:19:48.470 } 00:19:48.470 }, 00:19:48.470 { 00:19:48.470 "method": "bdev_raid_set_options", 00:19:48.470 "params": { 00:19:48.470 "process_window_size_kb": 1024 00:19:48.470 } 00:19:48.470 }, 00:19:48.470 { 00:19:48.470 "method": "bdev_iscsi_set_options", 00:19:48.470 "params": { 00:19:48.470 "timeout_sec": 30 00:19:48.470 } 00:19:48.470 }, 00:19:48.470 { 00:19:48.470 "method": "bdev_nvme_set_options", 00:19:48.470 "params": { 00:19:48.470 "action_on_timeout": "none", 00:19:48.470 "timeout_us": 0, 00:19:48.470 "timeout_admin_us": 0, 00:19:48.470 "keep_alive_timeout_ms": 10000, 00:19:48.470 "arbitration_burst": 0, 
00:19:48.470 "low_priority_weight": 0, 00:19:48.470 "medium_priority_weight": 0, 00:19:48.470 "high_priority_weight": 0, 00:19:48.470 "nvme_adminq_poll_period_us": 10000, 00:19:48.470 "nvme_ioq_poll_period_us": 0, 00:19:48.470 "io_queue_requests": 512, 00:19:48.470 "delay_cmd_submit": true, 00:19:48.470 "transport_retry_count": 4, 00:19:48.470 "bdev_retry_count": 3, 00:19:48.470 "transport_ack_timeout": 0, 00:19:48.470 "ctrlr_loss_timeout_sec": 0, 00:19:48.470 "reconnect_delay_sec": 0, 00:19:48.470 "fast_io_fail_timeout_sec": 0, 00:19:48.470 "disable_auto_failback": false, 00:19:48.470 "generate_uuids": false, 00:19:48.470 "transport_tos": 0, 00:19:48.470 "nvme_error_stat": false, 00:19:48.470 "rdma_srq_size": 0, 00:19:48.470 "io_path_stat": false, 00:19:48.470 "allow_accel_sequence": false, 00:19:48.470 "rdma_max_cq_size": 0, 00:19:48.470 "rdma_cm_event_timeout_ms": 0, 00:19:48.470 "dhchap_digests": [ 00:19:48.470 "sha256", 00:19:48.470 "sha384", 00:19:48.470 "sha512" 00:19:48.470 ], 00:19:48.470 "dhchap_dhgroups": [ 00:19:48.470 "null", 00:19:48.470 "ffdhe2048", 00:19:48.470 "ffdhe3072", 00:19:48.470 "ffdhe4096", 00:19:48.470 "ffdhe6144", 00:19:48.470 "ffdhe8192" 00:19:48.470 ] 00:19:48.470 } 00:19:48.470 }, 00:19:48.470 { 00:19:48.470 "method": "bdev_nvme_attach_controller", 00:19:48.470 "params": { 00:19:48.470 "name": "TLSTEST", 00:19:48.470 "trtype": "TCP", 00:19:48.470 "adrfam": "IPv4", 00:19:48.470 "traddr": "10.0.0.2", 00:19:48.470 "trsvcid": "4420", 00:19:48.470 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.470 "prchk_reftag": false, 00:19:48.470 "prchk_guard": false, 00:19:48.470 "ctrlr_loss_timeout_sec": 0, 00:19:48.470 "reconnect_delay_sec": 0, 00:19:48.470 "fast_io_fail_timeout_sec": 0, 00:19:48.470 "psk": "/tmp/tmp.99HwghmHBL", 00:19:48.470 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:48.470 "hdgst": false, 00:19:48.470 "ddgst": false 00:19:48.470 } 00:19:48.470 }, 00:19:48.470 { 00:19:48.470 "method": "bdev_nvme_set_hotplug", 00:19:48.470 "params": { 00:19:48.470 "period_us": 100000, 00:19:48.470 "enable": false 00:19:48.470 } 00:19:48.470 }, 00:19:48.470 { 00:19:48.470 "method": "bdev_wait_for_examine" 00:19:48.470 } 00:19:48.470 ] 00:19:48.470 }, 00:19:48.470 { 00:19:48.470 "subsystem": "nbd", 00:19:48.470 "config": [] 00:19:48.470 } 00:19:48.470 ] 00:19:48.470 }' 00:19:48.470 13:58:43 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 3779977 00:19:48.470 13:58:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3779977 ']' 00:19:48.470 13:58:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3779977 00:19:48.470 13:58:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:48.470 13:58:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:48.470 13:58:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3779977 00:19:48.470 13:58:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:48.470 13:58:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:48.470 13:58:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3779977' 00:19:48.470 killing process with pid 3779977 00:19:48.470 13:58:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3779977 00:19:48.470 Received shutdown signal, test time was about 10.000000 seconds 00:19:48.470 00:19:48.470 Latency(us) 00:19:48.470 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:19:48.470 =================================================================================================================== 00:19:48.470 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:48.470 [2024-07-15 13:58:43.153275] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:48.470 13:58:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3779977 00:19:48.727 13:58:43 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 3779807 00:19:48.727 13:58:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3779807 ']' 00:19:48.727 13:58:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3779807 00:19:48.727 13:58:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:48.727 13:58:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:48.727 13:58:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3779807 00:19:48.727 13:58:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:48.727 13:58:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:48.727 13:58:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3779807' 00:19:48.727 killing process with pid 3779807 00:19:48.727 13:58:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3779807 00:19:48.727 [2024-07-15 13:58:43.444176] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:48.727 13:58:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3779807 00:19:48.985 13:58:43 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:48.985 13:58:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:48.985 13:58:43 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:19:48.986 "subsystems": [ 00:19:48.986 { 00:19:48.986 "subsystem": "keyring", 00:19:48.986 "config": [] 00:19:48.986 }, 00:19:48.986 { 00:19:48.986 "subsystem": "iobuf", 00:19:48.986 "config": [ 00:19:48.986 { 00:19:48.986 "method": "iobuf_set_options", 00:19:48.986 "params": { 00:19:48.986 "small_pool_count": 8192, 00:19:48.986 "large_pool_count": 1024, 00:19:48.986 "small_bufsize": 8192, 00:19:48.986 "large_bufsize": 135168 00:19:48.986 } 00:19:48.986 } 00:19:48.986 ] 00:19:48.986 }, 00:19:48.986 { 00:19:48.986 "subsystem": "sock", 00:19:48.986 "config": [ 00:19:48.986 { 00:19:48.986 "method": "sock_set_default_impl", 00:19:48.986 "params": { 00:19:48.986 "impl_name": "posix" 00:19:48.986 } 00:19:48.986 }, 00:19:48.986 { 00:19:48.986 "method": "sock_impl_set_options", 00:19:48.986 "params": { 00:19:48.986 "impl_name": "ssl", 00:19:48.986 "recv_buf_size": 4096, 00:19:48.986 "send_buf_size": 4096, 00:19:48.986 "enable_recv_pipe": true, 00:19:48.986 "enable_quickack": false, 00:19:48.986 "enable_placement_id": 0, 00:19:48.986 "enable_zerocopy_send_server": true, 00:19:48.986 "enable_zerocopy_send_client": false, 00:19:48.986 "zerocopy_threshold": 0, 00:19:48.986 "tls_version": 0, 00:19:48.986 "enable_ktls": false 00:19:48.986 } 00:19:48.986 }, 00:19:48.986 { 00:19:48.986 "method": "sock_impl_set_options", 00:19:48.986 "params": { 00:19:48.986 "impl_name": "posix", 00:19:48.986 "recv_buf_size": 2097152, 00:19:48.986 "send_buf_size": 2097152, 00:19:48.986 "enable_recv_pipe": true, 
00:19:48.986 "enable_quickack": false, 00:19:48.986 "enable_placement_id": 0, 00:19:48.986 "enable_zerocopy_send_server": true, 00:19:48.986 "enable_zerocopy_send_client": false, 00:19:48.986 "zerocopy_threshold": 0, 00:19:48.986 "tls_version": 0, 00:19:48.986 "enable_ktls": false 00:19:48.986 } 00:19:48.986 } 00:19:48.986 ] 00:19:48.986 }, 00:19:48.986 { 00:19:48.986 "subsystem": "vmd", 00:19:48.986 "config": [] 00:19:48.986 }, 00:19:48.986 { 00:19:48.986 "subsystem": "accel", 00:19:48.986 "config": [ 00:19:48.986 { 00:19:48.986 "method": "accel_set_options", 00:19:48.986 "params": { 00:19:48.986 "small_cache_size": 128, 00:19:48.986 "large_cache_size": 16, 00:19:48.986 "task_count": 2048, 00:19:48.986 "sequence_count": 2048, 00:19:48.986 "buf_count": 2048 00:19:48.986 } 00:19:48.986 } 00:19:48.986 ] 00:19:48.986 }, 00:19:48.986 { 00:19:48.986 "subsystem": "bdev", 00:19:48.986 "config": [ 00:19:48.986 { 00:19:48.986 "method": "bdev_set_options", 00:19:48.986 "params": { 00:19:48.986 "bdev_io_pool_size": 65535, 00:19:48.986 "bdev_io_cache_size": 256, 00:19:48.986 "bdev_auto_examine": true, 00:19:48.986 "iobuf_small_cache_size": 128, 00:19:48.986 "iobuf_large_cache_size": 16 00:19:48.986 } 00:19:48.986 }, 00:19:48.986 { 00:19:48.986 "method": "bdev_raid_set_options", 00:19:48.986 "params": { 00:19:48.986 "process_window_size_kb": 1024 00:19:48.986 } 00:19:48.986 }, 00:19:48.986 { 00:19:48.986 "method": "bdev_iscsi_set_options", 00:19:48.986 "params": { 00:19:48.986 "timeout_sec": 30 00:19:48.986 } 00:19:48.986 }, 00:19:48.986 { 00:19:48.986 "method": "bdev_nvme_set_options", 00:19:48.986 "params": { 00:19:48.986 "action_on_timeout": "none", 00:19:48.986 "timeout_us": 0, 00:19:48.986 "timeout_admin_us": 0, 00:19:48.986 "keep_alive_timeout_ms": 10000, 00:19:48.986 "arbitration_burst": 0, 00:19:48.986 "low_priority_weight": 0, 00:19:48.986 "medium_priority_weight": 0, 00:19:48.986 "high_priority_weight": 0, 00:19:48.986 "nvme_adminq_poll_period_us": 10000, 00:19:48.986 "nvme_ioq_poll_period_us": 0, 00:19:48.986 "io_queue_requests": 0, 00:19:48.986 "delay_cmd_submit": true, 00:19:48.986 "transport_retry_count": 4, 00:19:48.986 "bdev_retry_count": 3, 00:19:48.986 "transport_ack_timeout": 0, 00:19:48.986 "ctrlr_loss_timeout_sec": 0, 00:19:48.986 "reconnect_delay_sec": 0, 00:19:48.986 "fast_io_fail_timeout_sec": 0, 00:19:48.986 "disable_auto_failback": false, 00:19:48.986 "generate_uuids": false, 00:19:48.986 "transport_tos": 0, 00:19:48.986 "nvme_error_stat": false, 00:19:48.986 "rdma_srq_size": 0, 00:19:48.986 "io_path_stat": false, 00:19:48.986 "allow_accel_sequence": false, 00:19:48.986 "rdma_max_cq_size": 0, 00:19:48.986 "rdma_cm_event_timeout_ms": 0, 00:19:48.986 "dhchap_digests": [ 00:19:48.986 "sha256", 00:19:48.986 "sha384", 00:19:48.986 "sha512" 00:19:48.986 ], 00:19:48.986 "dhchap_dhgroups": [ 00:19:48.986 "null", 00:19:48.986 "ffdhe2048", 00:19:48.986 "ffdhe3072", 00:19:48.986 "ffdhe4096", 00:19:48.986 "ffdhe6144", 00:19:48.986 "ffdhe8192" 00:19:48.986 ] 00:19:48.986 } 00:19:48.986 }, 00:19:48.986 { 00:19:48.986 "method": "bdev_nvme_set_hotplug", 00:19:48.986 "params": { 00:19:48.986 "period_us": 100000, 00:19:48.986 "enable": false 00:19:48.986 } 00:19:48.986 }, 00:19:48.986 { 00:19:48.986 "method": "bdev_malloc_create", 00:19:48.986 "params": { 00:19:48.986 "name": "malloc0", 00:19:48.986 "num_blocks": 8192, 00:19:48.986 "block_size": 4096, 00:19:48.986 "physical_block_size": 4096, 00:19:48.986 "uuid": "3230d7c3-7788-47c6-b91d-50e6451becc2", 00:19:48.986 "optimal_io_boundary": 0 
00:19:48.986 } 00:19:48.986 }, 00:19:48.986 { 00:19:48.986 "method": "bdev_wait_for_examine" 00:19:48.986 } 00:19:48.986 ] 00:19:48.986 }, 00:19:48.986 { 00:19:48.986 "subsystem": "nbd", 00:19:48.986 "config": [] 00:19:48.986 }, 00:19:48.986 { 00:19:48.986 "subsystem": "scheduler", 00:19:48.986 "config": [ 00:19:48.986 { 00:19:48.986 "method": "framework_set_scheduler", 00:19:48.986 "params": { 00:19:48.986 "name": "static" 00:19:48.986 } 00:19:48.986 } 00:19:48.986 ] 00:19:48.986 }, 00:19:48.986 { 00:19:48.986 "subsystem": "nvmf", 00:19:48.986 "config": [ 00:19:48.986 { 00:19:48.986 "method": "nvmf_set_config", 00:19:48.986 "params": { 00:19:48.986 "discovery_filter": "match_any", 00:19:48.986 "admin_cmd_passthru": { 00:19:48.986 "identify_ctrlr": false 00:19:48.986 } 00:19:48.986 } 00:19:48.986 }, 00:19:48.986 { 00:19:48.986 "method": "nvmf_set_max_subsystems", 00:19:48.986 "params": { 00:19:48.986 "max_subsystems": 1024 00:19:48.986 } 00:19:48.986 }, 00:19:48.986 { 00:19:48.986 "method": "nvmf_set_crdt", 00:19:48.986 "params": { 00:19:48.986 "crdt1": 0, 00:19:48.986 "crdt2": 0, 00:19:48.986 "crdt3": 0 00:19:48.986 } 00:19:48.986 }, 00:19:48.986 { 00:19:48.986 "method": "nvmf_create_transport", 00:19:48.986 "params": { 00:19:48.986 "trtype": "TCP", 00:19:48.986 "max_queue_depth": 128, 00:19:48.986 "max_io_qpairs_per_ctrlr": 127, 00:19:48.986 "in_capsule_data_size": 4096, 00:19:48.986 "max_io_size": 131072, 00:19:48.986 "io_unit_size": 131072, 00:19:48.986 "max_aq_depth": 128, 00:19:48.986 "num_shared_buffers": 511, 00:19:48.986 "buf_cache_size": 4294967295, 00:19:48.986 "dif_insert_or_strip": false, 00:19:48.986 "zcopy": false, 00:19:48.986 "c2h_success": false, 00:19:48.986 "sock_priority": 0, 00:19:48.986 "abort_timeout_sec": 1, 00:19:48.986 "ack_timeout": 0, 00:19:48.986 "data_wr_pool_size": 0 00:19:48.986 } 00:19:48.986 }, 00:19:48.986 { 00:19:48.986 "method": "nvmf_create_subsystem", 00:19:48.986 "params": { 00:19:48.986 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.986 "allow_any_host": false, 00:19:48.986 "serial_number": "SPDK00000000000001", 00:19:48.986 "model_number": "SPDK bdev Controller", 00:19:48.986 "max_namespaces": 10, 00:19:48.986 "min_cntlid": 1, 00:19:48.986 "max_cntlid": 65519, 00:19:48.986 "ana_reporting": false 00:19:48.986 } 00:19:48.986 }, 00:19:48.986 { 00:19:48.986 "method": "nvmf_subsystem_add_host", 00:19:48.986 "params": { 00:19:48.986 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.986 "host": "nqn.2016-06.io.spdk:host1", 00:19:48.986 "psk": "/tmp/tmp.99HwghmHBL" 00:19:48.986 } 00:19:48.986 }, 00:19:48.986 { 00:19:48.986 "method": "nvmf_subsystem_add_ns", 00:19:48.987 "params": { 00:19:48.987 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.987 "namespace": { 00:19:48.987 "nsid": 1, 00:19:48.987 "bdev_name": "malloc0", 00:19:48.987 "nguid": "3230D7C3778847C6B91D50E6451BECC2", 00:19:48.987 "uuid": "3230d7c3-7788-47c6-b91d-50e6451becc2", 00:19:48.987 "no_auto_visible": false 00:19:48.987 } 00:19:48.987 } 00:19:48.987 }, 00:19:48.987 { 00:19:48.987 "method": "nvmf_subsystem_add_listener", 00:19:48.987 "params": { 00:19:48.987 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.987 "listen_address": { 00:19:48.987 "trtype": "TCP", 00:19:48.987 "adrfam": "IPv4", 00:19:48.987 "traddr": "10.0.0.2", 00:19:48.987 "trsvcid": "4420" 00:19:48.987 }, 00:19:48.987 "secure_channel": true 00:19:48.987 } 00:19:48.987 } 00:19:48.987 ] 00:19:48.987 } 00:19:48.987 ] 00:19:48.987 }' 00:19:48.987 13:58:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:48.987 
13:58:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:48.987 13:58:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3780252 00:19:48.987 13:58:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:48.987 13:58:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3780252 00:19:48.987 13:58:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3780252 ']' 00:19:48.987 13:58:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:48.987 13:58:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:48.987 13:58:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:48.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:48.987 13:58:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:48.987 13:58:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:48.987 [2024-07-15 13:58:43.771929] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:19:48.987 [2024-07-15 13:58:43.772019] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:48.987 EAL: No free 2048 kB hugepages reported on node 1 00:19:49.244 [2024-07-15 13:58:43.835766] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.244 [2024-07-15 13:58:43.939355] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:49.244 [2024-07-15 13:58:43.939415] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:49.244 [2024-07-15 13:58:43.939442] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:49.244 [2024-07-15 13:58:43.939453] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:49.244 [2024-07-15 13:58:43.939463] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:49.244 [2024-07-15 13:58:43.939548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:49.502 [2024-07-15 13:58:44.156041] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:49.502 [2024-07-15 13:58:44.172009] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:49.502 [2024-07-15 13:58:44.188065] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:49.502 [2024-07-15 13:58:44.198920] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:50.067 13:58:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:50.067 13:58:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:50.067 13:58:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:50.067 13:58:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:50.067 13:58:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:50.067 13:58:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:50.067 13:58:44 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=3780404 00:19:50.067 13:58:44 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 3780404 /var/tmp/bdevperf.sock 00:19:50.067 13:58:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3780404 ']' 00:19:50.067 13:58:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:50.067 13:58:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:50.067 13:58:44 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:50.067 13:58:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:50.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:50.067 13:58:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:50.067 13:58:44 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:19:50.067 "subsystems": [ 00:19:50.067 { 00:19:50.067 "subsystem": "keyring", 00:19:50.067 "config": [] 00:19:50.067 }, 00:19:50.067 { 00:19:50.067 "subsystem": "iobuf", 00:19:50.067 "config": [ 00:19:50.067 { 00:19:50.067 "method": "iobuf_set_options", 00:19:50.067 "params": { 00:19:50.067 "small_pool_count": 8192, 00:19:50.067 "large_pool_count": 1024, 00:19:50.067 "small_bufsize": 8192, 00:19:50.067 "large_bufsize": 135168 00:19:50.067 } 00:19:50.067 } 00:19:50.067 ] 00:19:50.067 }, 00:19:50.067 { 00:19:50.067 "subsystem": "sock", 00:19:50.067 "config": [ 00:19:50.067 { 00:19:50.067 "method": "sock_set_default_impl", 00:19:50.067 "params": { 00:19:50.067 "impl_name": "posix" 00:19:50.067 } 00:19:50.067 }, 00:19:50.067 { 00:19:50.067 "method": "sock_impl_set_options", 00:19:50.067 "params": { 00:19:50.067 "impl_name": "ssl", 00:19:50.067 "recv_buf_size": 4096, 00:19:50.067 "send_buf_size": 4096, 00:19:50.067 "enable_recv_pipe": true, 00:19:50.067 "enable_quickack": false, 00:19:50.067 "enable_placement_id": 0, 00:19:50.067 "enable_zerocopy_send_server": true, 00:19:50.067 "enable_zerocopy_send_client": false, 00:19:50.067 "zerocopy_threshold": 0, 00:19:50.067 "tls_version": 0, 00:19:50.067 "enable_ktls": false 00:19:50.067 } 00:19:50.067 }, 00:19:50.067 { 00:19:50.067 "method": "sock_impl_set_options", 00:19:50.067 "params": { 00:19:50.067 "impl_name": "posix", 00:19:50.067 "recv_buf_size": 2097152, 00:19:50.067 "send_buf_size": 2097152, 00:19:50.067 "enable_recv_pipe": true, 00:19:50.067 "enable_quickack": false, 00:19:50.067 "enable_placement_id": 0, 00:19:50.067 "enable_zerocopy_send_server": true, 00:19:50.067 "enable_zerocopy_send_client": false, 00:19:50.067 "zerocopy_threshold": 0, 00:19:50.067 "tls_version": 0, 00:19:50.067 "enable_ktls": false 00:19:50.067 } 00:19:50.067 } 00:19:50.067 ] 00:19:50.067 }, 00:19:50.067 { 00:19:50.067 "subsystem": "vmd", 00:19:50.067 "config": [] 00:19:50.067 }, 00:19:50.067 { 00:19:50.067 "subsystem": "accel", 00:19:50.067 "config": [ 00:19:50.067 { 00:19:50.067 "method": "accel_set_options", 00:19:50.067 "params": { 00:19:50.067 "small_cache_size": 128, 00:19:50.067 "large_cache_size": 16, 00:19:50.067 "task_count": 2048, 00:19:50.067 "sequence_count": 2048, 00:19:50.067 "buf_count": 2048 00:19:50.067 } 00:19:50.067 } 00:19:50.067 ] 00:19:50.067 }, 00:19:50.067 { 00:19:50.067 "subsystem": "bdev", 00:19:50.067 "config": [ 00:19:50.067 { 00:19:50.067 "method": "bdev_set_options", 00:19:50.067 "params": { 00:19:50.067 "bdev_io_pool_size": 65535, 00:19:50.067 "bdev_io_cache_size": 256, 00:19:50.068 "bdev_auto_examine": true, 00:19:50.068 "iobuf_small_cache_size": 128, 00:19:50.068 "iobuf_large_cache_size": 16 00:19:50.068 } 00:19:50.068 }, 00:19:50.068 { 00:19:50.068 "method": "bdev_raid_set_options", 00:19:50.068 "params": { 00:19:50.068 "process_window_size_kb": 1024 00:19:50.068 } 00:19:50.068 }, 00:19:50.068 { 00:19:50.068 "method": "bdev_iscsi_set_options", 00:19:50.068 "params": { 00:19:50.068 "timeout_sec": 30 00:19:50.068 } 00:19:50.068 }, 00:19:50.068 { 00:19:50.068 "method": "bdev_nvme_set_options", 00:19:50.068 "params": { 00:19:50.068 "action_on_timeout": "none", 00:19:50.068 "timeout_us": 0, 00:19:50.068 "timeout_admin_us": 0, 00:19:50.068 "keep_alive_timeout_ms": 10000, 00:19:50.068 "arbitration_burst": 0, 00:19:50.068 "low_priority_weight": 0, 00:19:50.068 
"medium_priority_weight": 0, 00:19:50.068 "high_priority_weight": 0, 00:19:50.068 "nvme_adminq_poll_period_us": 10000, 00:19:50.068 "nvme_ioq_poll_period_us": 0, 00:19:50.068 "io_queue_requests": 512, 00:19:50.068 "delay_cmd_submit": true, 00:19:50.068 "transport_retry_count": 4, 00:19:50.068 "bdev_retry_count": 3, 00:19:50.068 "transport_ack_timeout": 0, 00:19:50.068 "ctrlr_loss_timeout_sec": 0, 00:19:50.068 "reconnect_delay_sec": 0, 00:19:50.068 "fast_io_fail_timeout_sec": 0, 00:19:50.068 "disable_auto_failback": false, 00:19:50.068 "generate_uuids": false, 00:19:50.068 "transport_tos": 0, 00:19:50.068 "nvme_error_stat": false, 00:19:50.068 "rdma_srq_size": 0, 00:19:50.068 "io_path_stat": false, 00:19:50.068 "allow_accel_sequence": false, 00:19:50.068 "rdma_max_cq_size": 0, 00:19:50.068 "rdma_cm_event_timeout_ms": 0, 00:19:50.068 "dhchap_digests": [ 00:19:50.068 "sha256", 00:19:50.068 "sha384", 00:19:50.068 "sha512" 00:19:50.068 ], 00:19:50.068 "dhchap_dhgroups": [ 00:19:50.068 "null", 00:19:50.068 "ffdhe2048", 00:19:50.068 "ffdhe3072", 00:19:50.068 "ffdhe4096", 00:19:50.068 "ffdhe6144", 00:19:50.068 "ffdhe8192" 00:19:50.068 ] 00:19:50.068 } 00:19:50.068 }, 00:19:50.068 { 00:19:50.068 "method": "bdev_nvme_attach_controller", 00:19:50.068 "params": { 00:19:50.068 "name": "TLSTEST", 00:19:50.068 "trtype": "TCP", 00:19:50.068 "adrfam": "IPv4", 00:19:50.068 "traddr": "10.0.0.2", 00:19:50.068 "trsvcid": "4420", 00:19:50.068 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:50.068 "prchk_reftag": false, 00:19:50.068 "prchk_guard": false, 00:19:50.068 "ctrlr_loss_timeout_sec": 0, 00:19:50.068 "reconnect_delay_sec": 0, 00:19:50.068 "fast_io_fail_timeout_sec": 0, 00:19:50.068 "psk": "/tmp/tmp.99HwghmHBL", 00:19:50.068 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:50.068 "hdgst": false, 00:19:50.068 "ddgst": false 00:19:50.068 } 00:19:50.068 }, 00:19:50.068 { 00:19:50.068 "method": "bdev_nvme_set_hotplug", 00:19:50.068 "params": { 00:19:50.068 "period_us": 100000, 00:19:50.068 "enable": false 00:19:50.068 } 00:19:50.068 }, 00:19:50.068 { 00:19:50.068 "method": "bdev_wait_for_examine" 00:19:50.068 } 00:19:50.068 ] 00:19:50.068 }, 00:19:50.068 { 00:19:50.068 "subsystem": "nbd", 00:19:50.068 "config": [] 00:19:50.068 } 00:19:50.068 ] 00:19:50.068 }' 00:19:50.068 13:58:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:50.068 [2024-07-15 13:58:44.797875] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
00:19:50.068 [2024-07-15 13:58:44.797951] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3780404 ] 00:19:50.068 EAL: No free 2048 kB hugepages reported on node 1 00:19:50.068 [2024-07-15 13:58:44.857795] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.325 [2024-07-15 13:58:44.973541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:50.325 [2024-07-15 13:58:45.144301] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:50.325 [2024-07-15 13:58:45.144439] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:51.257 13:58:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:51.257 13:58:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:51.257 13:58:45 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:51.257 Running I/O for 10 seconds... 00:20:01.222 00:20:01.222 Latency(us) 00:20:01.222 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:01.222 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:01.222 Verification LBA range: start 0x0 length 0x2000 00:20:01.222 TLSTESTn1 : 10.02 3163.20 12.36 0.00 0.00 40402.33 9951.76 45632.47 00:20:01.222 =================================================================================================================== 00:20:01.222 Total : 3163.20 12.36 0.00 0.00 40402.33 9951.76 45632.47 00:20:01.222 0 00:20:01.222 13:58:55 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:01.222 13:58:55 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 3780404 00:20:01.222 13:58:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3780404 ']' 00:20:01.222 13:58:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3780404 00:20:01.222 13:58:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:01.222 13:58:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:01.222 13:58:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3780404 00:20:01.222 13:58:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:01.222 13:58:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:01.222 13:58:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3780404' 00:20:01.222 killing process with pid 3780404 00:20:01.222 13:58:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3780404 00:20:01.222 Received shutdown signal, test time was about 10.000000 seconds 00:20:01.222 00:20:01.222 Latency(us) 00:20:01.222 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:01.222 =================================================================================================================== 00:20:01.222 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:01.222 [2024-07-15 13:58:55.932237] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 
'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:01.222 13:58:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3780404 00:20:01.480 13:58:56 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 3780252 00:20:01.480 13:58:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3780252 ']' 00:20:01.480 13:58:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3780252 00:20:01.480 13:58:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:01.480 13:58:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:01.480 13:58:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3780252 00:20:01.480 13:58:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:01.480 13:58:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:01.480 13:58:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3780252' 00:20:01.480 killing process with pid 3780252 00:20:01.480 13:58:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3780252 00:20:01.480 [2024-07-15 13:58:56.196388] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:01.480 13:58:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3780252 00:20:01.738 13:58:56 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:20:01.738 13:58:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:01.738 13:58:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:01.738 13:58:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.738 13:58:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3781738 00:20:01.738 13:58:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:01.738 13:58:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3781738 00:20:01.738 13:58:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3781738 ']' 00:20:01.738 13:58:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:01.738 13:58:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:01.738 13:58:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:01.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:01.738 13:58:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:01.738 13:58:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.738 [2024-07-15 13:58:56.511959] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
00:20:01.739 [2024-07-15 13:58:56.512050] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:01.739 EAL: No free 2048 kB hugepages reported on node 1 00:20:01.996 [2024-07-15 13:58:56.593664] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.996 [2024-07-15 13:58:56.730361] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:01.996 [2024-07-15 13:58:56.730430] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:01.996 [2024-07-15 13:58:56.730472] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:01.996 [2024-07-15 13:58:56.730494] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:01.996 [2024-07-15 13:58:56.730514] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:01.996 [2024-07-15 13:58:56.730553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:02.254 13:58:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:02.254 13:58:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:02.254 13:58:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:02.254 13:58:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:02.254 13:58:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:02.254 13:58:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:02.254 13:58:56 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.99HwghmHBL 00:20:02.254 13:58:56 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.99HwghmHBL 00:20:02.254 13:58:56 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:02.512 [2024-07-15 13:58:57.146999] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:02.512 13:58:57 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:02.770 13:58:57 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:03.027 [2024-07-15 13:58:57.676456] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:03.027 [2024-07-15 13:58:57.676668] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:03.027 13:58:57 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:03.284 malloc0 00:20:03.284 13:58:57 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:03.541 13:58:58 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.99HwghmHBL 00:20:03.799 [2024-07-15 13:58:58.465078] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:03.799 13:58:58 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=3782016 00:20:03.799 13:58:58 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:03.799 13:58:58 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:03.799 13:58:58 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 3782016 /var/tmp/bdevperf.sock 00:20:03.799 13:58:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3782016 ']' 00:20:03.799 13:58:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:03.799 13:58:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:03.799 13:58:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:03.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:03.799 13:58:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:03.799 13:58:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:03.799 [2024-07-15 13:58:58.528461] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:20:03.799 [2024-07-15 13:58:58.528529] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3782016 ] 00:20:03.799 EAL: No free 2048 kB hugepages reported on node 1 00:20:03.799 [2024-07-15 13:58:58.585932] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.056 [2024-07-15 13:58:58.692243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:04.056 13:58:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:04.056 13:58:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:04.056 13:58:58 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.99HwghmHBL 00:20:04.315 13:58:59 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:04.572 [2024-07-15 13:58:59.280506] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:04.572 nvme0n1 00:20:04.572 13:58:59 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:04.829 Running I/O for 1 seconds... 
00:20:05.764 00:20:05.764 Latency(us) 00:20:05.764 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:05.764 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:05.764 Verification LBA range: start 0x0 length 0x2000 00:20:05.764 nvme0n1 : 1.02 3637.63 14.21 0.00 0.00 34837.65 7718.68 34758.35 00:20:05.764 =================================================================================================================== 00:20:05.764 Total : 3637.63 14.21 0.00 0.00 34837.65 7718.68 34758.35 00:20:05.764 0 00:20:05.764 13:59:00 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 3782016 00:20:05.764 13:59:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3782016 ']' 00:20:05.764 13:59:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3782016 00:20:05.764 13:59:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:05.764 13:59:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:05.764 13:59:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3782016 00:20:05.764 13:59:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:05.764 13:59:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:05.764 13:59:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3782016' 00:20:05.764 killing process with pid 3782016 00:20:05.764 13:59:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3782016 00:20:05.764 Received shutdown signal, test time was about 1.000000 seconds 00:20:05.764 00:20:05.764 Latency(us) 00:20:05.764 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:05.764 =================================================================================================================== 00:20:05.764 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:05.764 13:59:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3782016 00:20:06.022 13:59:00 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 3781738 00:20:06.022 13:59:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3781738 ']' 00:20:06.022 13:59:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3781738 00:20:06.022 13:59:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:06.022 13:59:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:06.022 13:59:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3781738 00:20:06.022 13:59:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:06.022 13:59:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:06.022 13:59:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3781738' 00:20:06.022 killing process with pid 3781738 00:20:06.022 13:59:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3781738 00:20:06.022 [2024-07-15 13:59:00.837956] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:06.022 13:59:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3781738 00:20:06.280 13:59:01 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:20:06.280 13:59:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:06.280 
13:59:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:06.280 13:59:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.538 13:59:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3782301 00:20:06.538 13:59:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:06.538 13:59:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3782301 00:20:06.538 13:59:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3782301 ']' 00:20:06.538 13:59:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:06.538 13:59:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:06.538 13:59:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:06.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:06.538 13:59:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:06.538 13:59:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.538 [2024-07-15 13:59:01.172511] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:20:06.538 [2024-07-15 13:59:01.172590] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:06.538 EAL: No free 2048 kB hugepages reported on node 1 00:20:06.538 [2024-07-15 13:59:01.234538] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.538 [2024-07-15 13:59:01.344037] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:06.538 [2024-07-15 13:59:01.344107] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:06.538 [2024-07-15 13:59:01.344135] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:06.538 [2024-07-15 13:59:01.344146] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:06.538 [2024-07-15 13:59:01.344156] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:06.538 [2024-07-15 13:59:01.344187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:06.796 13:59:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:06.796 13:59:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:06.796 13:59:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:06.796 13:59:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:06.796 13:59:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.796 13:59:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:06.796 13:59:01 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:20:06.796 13:59:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.796 13:59:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.796 [2024-07-15 13:59:01.495370] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:06.796 malloc0 00:20:06.796 [2024-07-15 13:59:01.527357] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:06.796 [2024-07-15 13:59:01.527608] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:06.796 13:59:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.796 13:59:01 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=3782443 00:20:06.796 13:59:01 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:06.796 13:59:01 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 3782443 /var/tmp/bdevperf.sock 00:20:06.796 13:59:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3782443 ']' 00:20:06.796 13:59:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:06.796 13:59:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:06.796 13:59:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:06.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:06.796 13:59:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:06.796 13:59:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.796 [2024-07-15 13:59:01.594870] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
00:20:06.796 [2024-07-15 13:59:01.594949] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3782443 ] 00:20:06.796 EAL: No free 2048 kB hugepages reported on node 1 00:20:07.054 [2024-07-15 13:59:01.653598] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.054 [2024-07-15 13:59:01.759223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:07.054 13:59:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:07.054 13:59:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:07.055 13:59:01 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.99HwghmHBL 00:20:07.312 13:59:02 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:07.570 [2024-07-15 13:59:02.382550] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:07.836 nvme0n1 00:20:07.836 13:59:02 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:07.836 Running I/O for 1 seconds... 00:20:08.843 00:20:08.843 Latency(us) 00:20:08.843 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:08.843 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:08.843 Verification LBA range: start 0x0 length 0x2000 00:20:08.843 nvme0n1 : 1.03 3436.74 13.42 0.00 0.00 36837.66 6359.42 42719.76 00:20:08.843 =================================================================================================================== 00:20:08.843 Total : 3436.74 13.42 0.00 0.00 36837.66 6359.42 42719.76 00:20:08.843 0 00:20:08.843 13:59:03 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:20:08.843 13:59:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.843 13:59:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.101 13:59:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.101 13:59:03 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:20:09.101 "subsystems": [ 00:20:09.101 { 00:20:09.101 "subsystem": "keyring", 00:20:09.101 "config": [ 00:20:09.101 { 00:20:09.101 "method": "keyring_file_add_key", 00:20:09.101 "params": { 00:20:09.101 "name": "key0", 00:20:09.101 "path": "/tmp/tmp.99HwghmHBL" 00:20:09.101 } 00:20:09.101 } 00:20:09.101 ] 00:20:09.101 }, 00:20:09.101 { 00:20:09.101 "subsystem": "iobuf", 00:20:09.101 "config": [ 00:20:09.101 { 00:20:09.101 "method": "iobuf_set_options", 00:20:09.102 "params": { 00:20:09.102 "small_pool_count": 8192, 00:20:09.102 "large_pool_count": 1024, 00:20:09.102 "small_bufsize": 8192, 00:20:09.102 "large_bufsize": 135168 00:20:09.102 } 00:20:09.102 } 00:20:09.102 ] 00:20:09.102 }, 00:20:09.102 { 00:20:09.102 "subsystem": "sock", 00:20:09.102 "config": [ 00:20:09.102 { 00:20:09.102 "method": "sock_set_default_impl", 00:20:09.102 "params": { 00:20:09.102 "impl_name": "posix" 00:20:09.102 } 
00:20:09.102 }, 00:20:09.102 { 00:20:09.102 "method": "sock_impl_set_options", 00:20:09.102 "params": { 00:20:09.102 "impl_name": "ssl", 00:20:09.102 "recv_buf_size": 4096, 00:20:09.102 "send_buf_size": 4096, 00:20:09.102 "enable_recv_pipe": true, 00:20:09.102 "enable_quickack": false, 00:20:09.102 "enable_placement_id": 0, 00:20:09.102 "enable_zerocopy_send_server": true, 00:20:09.102 "enable_zerocopy_send_client": false, 00:20:09.102 "zerocopy_threshold": 0, 00:20:09.102 "tls_version": 0, 00:20:09.102 "enable_ktls": false 00:20:09.102 } 00:20:09.102 }, 00:20:09.102 { 00:20:09.102 "method": "sock_impl_set_options", 00:20:09.102 "params": { 00:20:09.102 "impl_name": "posix", 00:20:09.102 "recv_buf_size": 2097152, 00:20:09.102 "send_buf_size": 2097152, 00:20:09.102 "enable_recv_pipe": true, 00:20:09.102 "enable_quickack": false, 00:20:09.102 "enable_placement_id": 0, 00:20:09.102 "enable_zerocopy_send_server": true, 00:20:09.102 "enable_zerocopy_send_client": false, 00:20:09.102 "zerocopy_threshold": 0, 00:20:09.102 "tls_version": 0, 00:20:09.102 "enable_ktls": false 00:20:09.102 } 00:20:09.102 } 00:20:09.102 ] 00:20:09.102 }, 00:20:09.102 { 00:20:09.102 "subsystem": "vmd", 00:20:09.102 "config": [] 00:20:09.102 }, 00:20:09.102 { 00:20:09.102 "subsystem": "accel", 00:20:09.102 "config": [ 00:20:09.102 { 00:20:09.102 "method": "accel_set_options", 00:20:09.102 "params": { 00:20:09.102 "small_cache_size": 128, 00:20:09.102 "large_cache_size": 16, 00:20:09.102 "task_count": 2048, 00:20:09.102 "sequence_count": 2048, 00:20:09.102 "buf_count": 2048 00:20:09.102 } 00:20:09.102 } 00:20:09.102 ] 00:20:09.102 }, 00:20:09.102 { 00:20:09.102 "subsystem": "bdev", 00:20:09.102 "config": [ 00:20:09.102 { 00:20:09.102 "method": "bdev_set_options", 00:20:09.102 "params": { 00:20:09.102 "bdev_io_pool_size": 65535, 00:20:09.102 "bdev_io_cache_size": 256, 00:20:09.102 "bdev_auto_examine": true, 00:20:09.102 "iobuf_small_cache_size": 128, 00:20:09.102 "iobuf_large_cache_size": 16 00:20:09.102 } 00:20:09.102 }, 00:20:09.102 { 00:20:09.102 "method": "bdev_raid_set_options", 00:20:09.102 "params": { 00:20:09.102 "process_window_size_kb": 1024 00:20:09.102 } 00:20:09.102 }, 00:20:09.102 { 00:20:09.102 "method": "bdev_iscsi_set_options", 00:20:09.102 "params": { 00:20:09.102 "timeout_sec": 30 00:20:09.102 } 00:20:09.102 }, 00:20:09.102 { 00:20:09.102 "method": "bdev_nvme_set_options", 00:20:09.102 "params": { 00:20:09.102 "action_on_timeout": "none", 00:20:09.102 "timeout_us": 0, 00:20:09.102 "timeout_admin_us": 0, 00:20:09.102 "keep_alive_timeout_ms": 10000, 00:20:09.102 "arbitration_burst": 0, 00:20:09.102 "low_priority_weight": 0, 00:20:09.102 "medium_priority_weight": 0, 00:20:09.102 "high_priority_weight": 0, 00:20:09.102 "nvme_adminq_poll_period_us": 10000, 00:20:09.102 "nvme_ioq_poll_period_us": 0, 00:20:09.102 "io_queue_requests": 0, 00:20:09.102 "delay_cmd_submit": true, 00:20:09.102 "transport_retry_count": 4, 00:20:09.102 "bdev_retry_count": 3, 00:20:09.102 "transport_ack_timeout": 0, 00:20:09.102 "ctrlr_loss_timeout_sec": 0, 00:20:09.102 "reconnect_delay_sec": 0, 00:20:09.102 "fast_io_fail_timeout_sec": 0, 00:20:09.102 "disable_auto_failback": false, 00:20:09.102 "generate_uuids": false, 00:20:09.102 "transport_tos": 0, 00:20:09.102 "nvme_error_stat": false, 00:20:09.102 "rdma_srq_size": 0, 00:20:09.102 "io_path_stat": false, 00:20:09.102 "allow_accel_sequence": false, 00:20:09.102 "rdma_max_cq_size": 0, 00:20:09.102 "rdma_cm_event_timeout_ms": 0, 00:20:09.102 "dhchap_digests": [ 00:20:09.102 "sha256", 
00:20:09.102 "sha384", 00:20:09.102 "sha512" 00:20:09.102 ], 00:20:09.102 "dhchap_dhgroups": [ 00:20:09.102 "null", 00:20:09.102 "ffdhe2048", 00:20:09.102 "ffdhe3072", 00:20:09.102 "ffdhe4096", 00:20:09.102 "ffdhe6144", 00:20:09.102 "ffdhe8192" 00:20:09.102 ] 00:20:09.102 } 00:20:09.102 }, 00:20:09.102 { 00:20:09.102 "method": "bdev_nvme_set_hotplug", 00:20:09.102 "params": { 00:20:09.102 "period_us": 100000, 00:20:09.102 "enable": false 00:20:09.102 } 00:20:09.102 }, 00:20:09.102 { 00:20:09.102 "method": "bdev_malloc_create", 00:20:09.102 "params": { 00:20:09.102 "name": "malloc0", 00:20:09.102 "num_blocks": 8192, 00:20:09.102 "block_size": 4096, 00:20:09.102 "physical_block_size": 4096, 00:20:09.102 "uuid": "8ae1ac7e-f4d6-4ae9-bf9e-d6d2dddd7df9", 00:20:09.102 "optimal_io_boundary": 0 00:20:09.102 } 00:20:09.102 }, 00:20:09.102 { 00:20:09.102 "method": "bdev_wait_for_examine" 00:20:09.102 } 00:20:09.102 ] 00:20:09.102 }, 00:20:09.102 { 00:20:09.102 "subsystem": "nbd", 00:20:09.102 "config": [] 00:20:09.102 }, 00:20:09.102 { 00:20:09.102 "subsystem": "scheduler", 00:20:09.102 "config": [ 00:20:09.102 { 00:20:09.102 "method": "framework_set_scheduler", 00:20:09.102 "params": { 00:20:09.102 "name": "static" 00:20:09.102 } 00:20:09.102 } 00:20:09.102 ] 00:20:09.102 }, 00:20:09.102 { 00:20:09.102 "subsystem": "nvmf", 00:20:09.102 "config": [ 00:20:09.102 { 00:20:09.102 "method": "nvmf_set_config", 00:20:09.102 "params": { 00:20:09.102 "discovery_filter": "match_any", 00:20:09.102 "admin_cmd_passthru": { 00:20:09.102 "identify_ctrlr": false 00:20:09.102 } 00:20:09.102 } 00:20:09.102 }, 00:20:09.102 { 00:20:09.102 "method": "nvmf_set_max_subsystems", 00:20:09.102 "params": { 00:20:09.102 "max_subsystems": 1024 00:20:09.102 } 00:20:09.102 }, 00:20:09.102 { 00:20:09.102 "method": "nvmf_set_crdt", 00:20:09.102 "params": { 00:20:09.102 "crdt1": 0, 00:20:09.102 "crdt2": 0, 00:20:09.102 "crdt3": 0 00:20:09.102 } 00:20:09.102 }, 00:20:09.102 { 00:20:09.102 "method": "nvmf_create_transport", 00:20:09.102 "params": { 00:20:09.102 "trtype": "TCP", 00:20:09.102 "max_queue_depth": 128, 00:20:09.102 "max_io_qpairs_per_ctrlr": 127, 00:20:09.102 "in_capsule_data_size": 4096, 00:20:09.102 "max_io_size": 131072, 00:20:09.102 "io_unit_size": 131072, 00:20:09.102 "max_aq_depth": 128, 00:20:09.102 "num_shared_buffers": 511, 00:20:09.102 "buf_cache_size": 4294967295, 00:20:09.102 "dif_insert_or_strip": false, 00:20:09.102 "zcopy": false, 00:20:09.102 "c2h_success": false, 00:20:09.102 "sock_priority": 0, 00:20:09.102 "abort_timeout_sec": 1, 00:20:09.102 "ack_timeout": 0, 00:20:09.102 "data_wr_pool_size": 0 00:20:09.102 } 00:20:09.102 }, 00:20:09.102 { 00:20:09.102 "method": "nvmf_create_subsystem", 00:20:09.102 "params": { 00:20:09.102 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.102 "allow_any_host": false, 00:20:09.102 "serial_number": "00000000000000000000", 00:20:09.102 "model_number": "SPDK bdev Controller", 00:20:09.102 "max_namespaces": 32, 00:20:09.102 "min_cntlid": 1, 00:20:09.102 "max_cntlid": 65519, 00:20:09.102 "ana_reporting": false 00:20:09.102 } 00:20:09.102 }, 00:20:09.102 { 00:20:09.102 "method": "nvmf_subsystem_add_host", 00:20:09.102 "params": { 00:20:09.102 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.102 "host": "nqn.2016-06.io.spdk:host1", 00:20:09.102 "psk": "key0" 00:20:09.102 } 00:20:09.102 }, 00:20:09.102 { 00:20:09.102 "method": "nvmf_subsystem_add_ns", 00:20:09.102 "params": { 00:20:09.102 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.102 "namespace": { 00:20:09.102 "nsid": 1, 
00:20:09.102 "bdev_name": "malloc0", 00:20:09.102 "nguid": "8AE1AC7EF4D64AE9BF9ED6D2DDDD7DF9", 00:20:09.102 "uuid": "8ae1ac7e-f4d6-4ae9-bf9e-d6d2dddd7df9", 00:20:09.102 "no_auto_visible": false 00:20:09.102 } 00:20:09.103 } 00:20:09.103 }, 00:20:09.103 { 00:20:09.103 "method": "nvmf_subsystem_add_listener", 00:20:09.103 "params": { 00:20:09.103 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.103 "listen_address": { 00:20:09.103 "trtype": "TCP", 00:20:09.103 "adrfam": "IPv4", 00:20:09.103 "traddr": "10.0.0.2", 00:20:09.103 "trsvcid": "4420" 00:20:09.103 }, 00:20:09.103 "secure_channel": true 00:20:09.103 } 00:20:09.103 } 00:20:09.103 ] 00:20:09.103 } 00:20:09.103 ] 00:20:09.103 }' 00:20:09.103 13:59:03 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:09.360 13:59:04 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:20:09.360 "subsystems": [ 00:20:09.360 { 00:20:09.360 "subsystem": "keyring", 00:20:09.360 "config": [ 00:20:09.360 { 00:20:09.360 "method": "keyring_file_add_key", 00:20:09.360 "params": { 00:20:09.360 "name": "key0", 00:20:09.360 "path": "/tmp/tmp.99HwghmHBL" 00:20:09.360 } 00:20:09.360 } 00:20:09.360 ] 00:20:09.360 }, 00:20:09.360 { 00:20:09.360 "subsystem": "iobuf", 00:20:09.360 "config": [ 00:20:09.360 { 00:20:09.360 "method": "iobuf_set_options", 00:20:09.360 "params": { 00:20:09.360 "small_pool_count": 8192, 00:20:09.360 "large_pool_count": 1024, 00:20:09.360 "small_bufsize": 8192, 00:20:09.360 "large_bufsize": 135168 00:20:09.361 } 00:20:09.361 } 00:20:09.361 ] 00:20:09.361 }, 00:20:09.361 { 00:20:09.361 "subsystem": "sock", 00:20:09.361 "config": [ 00:20:09.361 { 00:20:09.361 "method": "sock_set_default_impl", 00:20:09.361 "params": { 00:20:09.361 "impl_name": "posix" 00:20:09.361 } 00:20:09.361 }, 00:20:09.361 { 00:20:09.361 "method": "sock_impl_set_options", 00:20:09.361 "params": { 00:20:09.361 "impl_name": "ssl", 00:20:09.361 "recv_buf_size": 4096, 00:20:09.361 "send_buf_size": 4096, 00:20:09.361 "enable_recv_pipe": true, 00:20:09.361 "enable_quickack": false, 00:20:09.361 "enable_placement_id": 0, 00:20:09.361 "enable_zerocopy_send_server": true, 00:20:09.361 "enable_zerocopy_send_client": false, 00:20:09.361 "zerocopy_threshold": 0, 00:20:09.361 "tls_version": 0, 00:20:09.361 "enable_ktls": false 00:20:09.361 } 00:20:09.361 }, 00:20:09.361 { 00:20:09.361 "method": "sock_impl_set_options", 00:20:09.361 "params": { 00:20:09.361 "impl_name": "posix", 00:20:09.361 "recv_buf_size": 2097152, 00:20:09.361 "send_buf_size": 2097152, 00:20:09.361 "enable_recv_pipe": true, 00:20:09.361 "enable_quickack": false, 00:20:09.361 "enable_placement_id": 0, 00:20:09.361 "enable_zerocopy_send_server": true, 00:20:09.361 "enable_zerocopy_send_client": false, 00:20:09.361 "zerocopy_threshold": 0, 00:20:09.361 "tls_version": 0, 00:20:09.361 "enable_ktls": false 00:20:09.361 } 00:20:09.361 } 00:20:09.361 ] 00:20:09.361 }, 00:20:09.361 { 00:20:09.361 "subsystem": "vmd", 00:20:09.361 "config": [] 00:20:09.361 }, 00:20:09.361 { 00:20:09.361 "subsystem": "accel", 00:20:09.361 "config": [ 00:20:09.361 { 00:20:09.361 "method": "accel_set_options", 00:20:09.361 "params": { 00:20:09.361 "small_cache_size": 128, 00:20:09.361 "large_cache_size": 16, 00:20:09.361 "task_count": 2048, 00:20:09.361 "sequence_count": 2048, 00:20:09.361 "buf_count": 2048 00:20:09.361 } 00:20:09.361 } 00:20:09.361 ] 00:20:09.361 }, 00:20:09.361 { 00:20:09.361 "subsystem": "bdev", 00:20:09.361 "config": [ 
00:20:09.361 { 00:20:09.361 "method": "bdev_set_options", 00:20:09.361 "params": { 00:20:09.361 "bdev_io_pool_size": 65535, 00:20:09.361 "bdev_io_cache_size": 256, 00:20:09.361 "bdev_auto_examine": true, 00:20:09.361 "iobuf_small_cache_size": 128, 00:20:09.361 "iobuf_large_cache_size": 16 00:20:09.361 } 00:20:09.361 }, 00:20:09.361 { 00:20:09.361 "method": "bdev_raid_set_options", 00:20:09.361 "params": { 00:20:09.361 "process_window_size_kb": 1024 00:20:09.361 } 00:20:09.361 }, 00:20:09.361 { 00:20:09.361 "method": "bdev_iscsi_set_options", 00:20:09.361 "params": { 00:20:09.361 "timeout_sec": 30 00:20:09.361 } 00:20:09.361 }, 00:20:09.361 { 00:20:09.361 "method": "bdev_nvme_set_options", 00:20:09.361 "params": { 00:20:09.361 "action_on_timeout": "none", 00:20:09.361 "timeout_us": 0, 00:20:09.361 "timeout_admin_us": 0, 00:20:09.361 "keep_alive_timeout_ms": 10000, 00:20:09.361 "arbitration_burst": 0, 00:20:09.361 "low_priority_weight": 0, 00:20:09.361 "medium_priority_weight": 0, 00:20:09.361 "high_priority_weight": 0, 00:20:09.361 "nvme_adminq_poll_period_us": 10000, 00:20:09.361 "nvme_ioq_poll_period_us": 0, 00:20:09.361 "io_queue_requests": 512, 00:20:09.361 "delay_cmd_submit": true, 00:20:09.361 "transport_retry_count": 4, 00:20:09.361 "bdev_retry_count": 3, 00:20:09.361 "transport_ack_timeout": 0, 00:20:09.361 "ctrlr_loss_timeout_sec": 0, 00:20:09.361 "reconnect_delay_sec": 0, 00:20:09.361 "fast_io_fail_timeout_sec": 0, 00:20:09.361 "disable_auto_failback": false, 00:20:09.361 "generate_uuids": false, 00:20:09.361 "transport_tos": 0, 00:20:09.361 "nvme_error_stat": false, 00:20:09.361 "rdma_srq_size": 0, 00:20:09.361 "io_path_stat": false, 00:20:09.361 "allow_accel_sequence": false, 00:20:09.361 "rdma_max_cq_size": 0, 00:20:09.361 "rdma_cm_event_timeout_ms": 0, 00:20:09.361 "dhchap_digests": [ 00:20:09.361 "sha256", 00:20:09.361 "sha384", 00:20:09.361 "sha512" 00:20:09.361 ], 00:20:09.361 "dhchap_dhgroups": [ 00:20:09.361 "null", 00:20:09.361 "ffdhe2048", 00:20:09.361 "ffdhe3072", 00:20:09.361 "ffdhe4096", 00:20:09.361 "ffdhe6144", 00:20:09.361 "ffdhe8192" 00:20:09.361 ] 00:20:09.361 } 00:20:09.361 }, 00:20:09.361 { 00:20:09.361 "method": "bdev_nvme_attach_controller", 00:20:09.361 "params": { 00:20:09.361 "name": "nvme0", 00:20:09.361 "trtype": "TCP", 00:20:09.361 "adrfam": "IPv4", 00:20:09.361 "traddr": "10.0.0.2", 00:20:09.361 "trsvcid": "4420", 00:20:09.361 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.361 "prchk_reftag": false, 00:20:09.361 "prchk_guard": false, 00:20:09.361 "ctrlr_loss_timeout_sec": 0, 00:20:09.361 "reconnect_delay_sec": 0, 00:20:09.361 "fast_io_fail_timeout_sec": 0, 00:20:09.361 "psk": "key0", 00:20:09.361 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:09.361 "hdgst": false, 00:20:09.361 "ddgst": false 00:20:09.361 } 00:20:09.361 }, 00:20:09.361 { 00:20:09.361 "method": "bdev_nvme_set_hotplug", 00:20:09.361 "params": { 00:20:09.361 "period_us": 100000, 00:20:09.361 "enable": false 00:20:09.361 } 00:20:09.361 }, 00:20:09.361 { 00:20:09.361 "method": "bdev_enable_histogram", 00:20:09.361 "params": { 00:20:09.361 "name": "nvme0n1", 00:20:09.361 "enable": true 00:20:09.361 } 00:20:09.361 }, 00:20:09.361 { 00:20:09.361 "method": "bdev_wait_for_examine" 00:20:09.361 } 00:20:09.361 ] 00:20:09.361 }, 00:20:09.361 { 00:20:09.361 "subsystem": "nbd", 00:20:09.361 "config": [] 00:20:09.361 } 00:20:09.361 ] 00:20:09.361 }' 00:20:09.361 13:59:04 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 3782443 00:20:09.361 13:59:04 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@948 -- # '[' -z 3782443 ']' 00:20:09.361 13:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3782443 00:20:09.361 13:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:09.361 13:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:09.361 13:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3782443 00:20:09.361 13:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:09.361 13:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:09.361 13:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3782443' 00:20:09.361 killing process with pid 3782443 00:20:09.361 13:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3782443 00:20:09.361 Received shutdown signal, test time was about 1.000000 seconds 00:20:09.361 00:20:09.361 Latency(us) 00:20:09.361 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.361 =================================================================================================================== 00:20:09.361 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:09.361 13:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3782443 00:20:09.619 13:59:04 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 3782301 00:20:09.619 13:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3782301 ']' 00:20:09.619 13:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3782301 00:20:09.619 13:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:09.619 13:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:09.619 13:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3782301 00:20:09.619 13:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:09.619 13:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:09.619 13:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3782301' 00:20:09.619 killing process with pid 3782301 00:20:09.619 13:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3782301 00:20:09.619 13:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3782301 00:20:09.877 13:59:04 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:20:09.877 13:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:09.877 13:59:04 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:20:09.877 "subsystems": [ 00:20:09.877 { 00:20:09.877 "subsystem": "keyring", 00:20:09.877 "config": [ 00:20:09.877 { 00:20:09.877 "method": "keyring_file_add_key", 00:20:09.877 "params": { 00:20:09.877 "name": "key0", 00:20:09.877 "path": "/tmp/tmp.99HwghmHBL" 00:20:09.877 } 00:20:09.877 } 00:20:09.877 ] 00:20:09.877 }, 00:20:09.877 { 00:20:09.877 "subsystem": "iobuf", 00:20:09.877 "config": [ 00:20:09.877 { 00:20:09.877 "method": "iobuf_set_options", 00:20:09.877 "params": { 00:20:09.877 "small_pool_count": 8192, 00:20:09.877 "large_pool_count": 1024, 00:20:09.877 "small_bufsize": 8192, 00:20:09.877 "large_bufsize": 135168 00:20:09.877 } 00:20:09.877 } 00:20:09.877 ] 00:20:09.877 }, 00:20:09.877 { 00:20:09.877 "subsystem": "sock", 00:20:09.877 "config": [ 00:20:09.877 { 
00:20:09.877 "method": "sock_set_default_impl", 00:20:09.877 "params": { 00:20:09.877 "impl_name": "posix" 00:20:09.877 } 00:20:09.877 }, 00:20:09.877 { 00:20:09.877 "method": "sock_impl_set_options", 00:20:09.877 "params": { 00:20:09.877 "impl_name": "ssl", 00:20:09.877 "recv_buf_size": 4096, 00:20:09.877 "send_buf_size": 4096, 00:20:09.877 "enable_recv_pipe": true, 00:20:09.877 "enable_quickack": false, 00:20:09.877 "enable_placement_id": 0, 00:20:09.877 "enable_zerocopy_send_server": true, 00:20:09.877 "enable_zerocopy_send_client": false, 00:20:09.877 "zerocopy_threshold": 0, 00:20:09.877 "tls_version": 0, 00:20:09.877 "enable_ktls": false 00:20:09.877 } 00:20:09.877 }, 00:20:09.877 { 00:20:09.877 "method": "sock_impl_set_options", 00:20:09.877 "params": { 00:20:09.877 "impl_name": "posix", 00:20:09.877 "recv_buf_size": 2097152, 00:20:09.877 "send_buf_size": 2097152, 00:20:09.877 "enable_recv_pipe": true, 00:20:09.877 "enable_quickack": false, 00:20:09.877 "enable_placement_id": 0, 00:20:09.877 "enable_zerocopy_send_server": true, 00:20:09.878 "enable_zerocopy_send_client": false, 00:20:09.878 "zerocopy_threshold": 0, 00:20:09.878 "tls_version": 0, 00:20:09.878 "enable_ktls": false 00:20:09.878 } 00:20:09.878 } 00:20:09.878 ] 00:20:09.878 }, 00:20:09.878 { 00:20:09.878 "subsystem": "vmd", 00:20:09.878 "config": [] 00:20:09.878 }, 00:20:09.878 { 00:20:09.878 "subsystem": "accel", 00:20:09.878 "config": [ 00:20:09.878 { 00:20:09.878 "method": "accel_set_options", 00:20:09.878 "params": { 00:20:09.878 "small_cache_size": 128, 00:20:09.878 "large_cache_size": 16, 00:20:09.878 "task_count": 2048, 00:20:09.878 "sequence_count": 2048, 00:20:09.878 "buf_count": 2048 00:20:09.878 } 00:20:09.878 } 00:20:09.878 ] 00:20:09.878 }, 00:20:09.878 { 00:20:09.878 "subsystem": "bdev", 00:20:09.878 "config": [ 00:20:09.878 { 00:20:09.878 "method": "bdev_set_options", 00:20:09.878 "params": { 00:20:09.878 "bdev_io_pool_size": 65535, 00:20:09.878 "bdev_io_cache_size": 256, 00:20:09.878 "bdev_auto_examine": true, 00:20:09.878 "iobuf_small_cache_size": 128, 00:20:09.878 "iobuf_large_cache_size": 16 00:20:09.878 } 00:20:09.878 }, 00:20:09.878 { 00:20:09.878 "method": "bdev_raid_set_options", 00:20:09.878 "params": { 00:20:09.878 "process_window_size_kb": 1024 00:20:09.878 } 00:20:09.878 }, 00:20:09.878 { 00:20:09.878 "method": "bdev_iscsi_set_options", 00:20:09.878 "params": { 00:20:09.878 "timeout_sec": 30 00:20:09.878 } 00:20:09.878 }, 00:20:09.878 { 00:20:09.878 "method": "bdev_nvme_set_options", 00:20:09.878 "params": { 00:20:09.878 "action_on_timeout": "none", 00:20:09.878 "timeout_us": 0, 00:20:09.878 "timeout_admin_us": 0, 00:20:09.878 "keep_alive_timeout_ms": 10000, 00:20:09.878 "arbitration_burst": 0, 00:20:09.878 "low_priority_weight": 0, 00:20:09.878 "medium_priority_weight": 0, 00:20:09.878 "high_priority_weight": 0, 00:20:09.878 "nvme_adminq_poll_period_us": 10000, 00:20:09.878 "nvme_ioq_poll_period_us": 0, 00:20:09.878 "io_queue_requests": 0, 00:20:09.878 "delay_cmd_submit": true, 00:20:09.878 "transport_retry_count": 4, 00:20:09.878 "bdev_retry_count": 3, 00:20:09.878 "transport_ack_timeout": 0, 00:20:09.878 "ctrlr_loss_timeout_sec": 0, 00:20:09.878 "reconnect_delay_sec": 0, 00:20:09.878 "fast_io_fail_timeout_sec": 0, 00:20:09.878 "disable_auto_failback": false, 00:20:09.878 "generate_uuids": false, 00:20:09.878 "transport_tos": 0, 00:20:09.878 "nvme_error_stat": false, 00:20:09.878 "rdma_srq_size": 0, 00:20:09.878 "io_path_stat": false, 00:20:09.878 "allow_accel_sequence": false, 00:20:09.878 
"rdma_max_cq_size": 0, 00:20:09.878 "rdma_cm_event_timeout_ms": 0, 00:20:09.878 "dhchap_digests": [ 00:20:09.878 "sha256", 00:20:09.878 "sha384", 00:20:09.878 "sha512" 00:20:09.878 ], 00:20:09.878 "dhchap_dhgroups": [ 00:20:09.878 "null", 00:20:09.878 "ffdhe2048", 00:20:09.878 "ffdhe3072", 00:20:09.878 "ffdhe4096", 00:20:09.878 "ffdhe6144", 00:20:09.878 "ffdhe8192" 00:20:09.878 ] 00:20:09.878 } 00:20:09.878 }, 00:20:09.878 { 00:20:09.878 "method": "bdev_nvme_set_hotplug", 00:20:09.878 "params": { 00:20:09.878 "period_us": 100000, 00:20:09.878 "enable": false 00:20:09.878 } 00:20:09.878 }, 00:20:09.878 { 00:20:09.878 "method": "bdev_malloc_create", 00:20:09.878 "params": { 00:20:09.878 "name": "malloc0", 00:20:09.878 "num_blocks": 8192, 00:20:09.878 "block_size": 4096, 00:20:09.878 "physical_block_size": 4096, 00:20:09.878 "uuid": "8ae1ac7e-f4d6-4ae9-bf9e-d6d2dddd7df9", 00:20:09.878 "optimal_io_boundary": 0 00:20:09.878 } 00:20:09.878 }, 00:20:09.878 { 00:20:09.878 "method": "bdev_wait_for_examine" 00:20:09.878 } 00:20:09.878 ] 00:20:09.878 }, 00:20:09.878 { 00:20:09.878 "subsystem": "nbd", 00:20:09.878 "config": [] 00:20:09.878 }, 00:20:09.878 { 00:20:09.878 "subsystem": "scheduler", 00:20:09.878 "config": [ 00:20:09.878 { 00:20:09.878 "method": "framework_set_scheduler", 00:20:09.878 "params": { 00:20:09.878 "name": "static" 00:20:09.878 } 00:20:09.878 } 00:20:09.878 ] 00:20:09.878 }, 00:20:09.878 { 00:20:09.878 "subsystem": "nvmf", 00:20:09.878 "config": [ 00:20:09.878 { 00:20:09.878 "method": "nvmf_set_config", 00:20:09.878 "params": { 00:20:09.878 "discovery_filter": "match_any", 00:20:09.878 "admin_cmd_passthru": { 00:20:09.878 "identify_ctrlr": false 00:20:09.878 } 00:20:09.878 } 00:20:09.878 }, 00:20:09.878 { 00:20:09.878 "method": "nvmf_set_max_subsystems", 00:20:09.878 "params": { 00:20:09.878 "max_subsystems": 1024 00:20:09.878 } 00:20:09.878 }, 00:20:09.878 { 00:20:09.878 "method": "nvmf_set_crdt", 00:20:09.878 "params": { 00:20:09.878 "crdt1": 0, 00:20:09.878 "crdt2": 0, 00:20:09.878 "crdt3": 0 00:20:09.878 } 00:20:09.878 }, 00:20:09.878 { 00:20:09.878 "method": "nvmf_create_transport", 00:20:09.878 "params": { 00:20:09.878 "trtype": "TCP", 00:20:09.878 "max_queue_depth": 128, 00:20:09.878 "max_io_qpairs_per_ctrlr": 127, 00:20:09.878 "in_capsule_data_size": 4096, 00:20:09.878 "max_io_size": 131072, 00:20:09.878 "io_unit_size": 131072, 00:20:09.878 "max_aq_depth": 128, 00:20:09.878 "num_shared_buffers": 511, 00:20:09.878 "buf_cache_size": 4294967295, 00:20:09.878 "dif_insert_or_strip": false, 00:20:09.878 "zcopy": false, 00:20:09.878 "c2h_success": false, 00:20:09.878 "sock_priority": 0, 00:20:09.878 "abort_timeout_sec": 1, 00:20:09.878 "ack_timeout": 0, 00:20:09.878 "data_wr_pool_size": 0 00:20:09.878 } 00:20:09.878 }, 00:20:09.878 { 00:20:09.878 "method": "nvmf_create_subsystem", 00:20:09.878 "params": { 00:20:09.878 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.878 "allow_any_host": false, 00:20:09.878 "serial_number": "00000000000000000000", 00:20:09.878 "model_number": "SPDK bdev Controller", 00:20:09.878 "max_namespaces": 32, 00:20:09.878 "min_cntlid": 1, 00:20:09.878 "max_cntlid": 65519, 00:20:09.878 "ana_reporting": false 00:20:09.878 } 00:20:09.878 }, 00:20:09.878 { 00:20:09.878 "method": "nvmf_subsystem_add_host", 00:20:09.878 "params": { 00:20:09.878 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.878 "host": "nqn.2016-06.io.spdk:host1", 00:20:09.878 "psk": "key0" 00:20:09.878 } 00:20:09.878 }, 00:20:09.878 { 00:20:09.878 "method": "nvmf_subsystem_add_ns", 00:20:09.878 
"params": { 00:20:09.878 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.878 "namespace": { 00:20:09.878 "nsid": 1, 00:20:09.878 "bdev_name": "malloc0", 00:20:09.878 "nguid": "8AE1AC7EF4D64AE9BF9ED6D2DDDD7DF9", 00:20:09.878 "uuid": "8ae1ac7e-f4d6-4ae9-bf9e-d6d2dddd7df9", 00:20:09.878 "no_auto_visible": false 00:20:09.878 } 00:20:09.878 } 00:20:09.878 }, 00:20:09.878 { 00:20:09.878 "method": "nvmf_subsystem_add_listener", 00:20:09.878 "params": { 00:20:09.878 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.878 "listen_address": { 00:20:09.878 "trtype": "TCP", 00:20:09.878 "adrfam": "IPv4", 00:20:09.878 "traddr": "10.0.0.2", 00:20:09.878 "trsvcid": "4420" 00:20:09.878 }, 00:20:09.878 "secure_channel": true 00:20:09.878 } 00:20:09.878 } 00:20:09.878 ] 00:20:09.878 } 00:20:09.878 ] 00:20:09.878 }' 00:20:09.878 13:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:09.878 13:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.878 13:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3782754 00:20:09.878 13:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:09.878 13:59:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3782754 00:20:09.878 13:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3782754 ']' 00:20:09.878 13:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:09.878 13:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:09.878 13:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:09.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:09.878 13:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:09.878 13:59:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.878 [2024-07-15 13:59:04.665150] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:20:09.878 [2024-07-15 13:59:04.665226] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:09.878 EAL: No free 2048 kB hugepages reported on node 1 00:20:10.137 [2024-07-15 13:59:04.733292] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:10.137 [2024-07-15 13:59:04.843037] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:10.137 [2024-07-15 13:59:04.843104] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:10.137 [2024-07-15 13:59:04.843132] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:10.137 [2024-07-15 13:59:04.843142] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:10.137 [2024-07-15 13:59:04.843152] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:10.137 [2024-07-15 13:59:04.843229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:10.395 [2024-07-15 13:59:05.081719] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:10.395 [2024-07-15 13:59:05.113749] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:10.395 [2024-07-15 13:59:05.121940] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:10.963 13:59:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:10.963 13:59:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:10.963 13:59:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:10.963 13:59:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:10.963 13:59:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:10.963 13:59:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:10.963 13:59:05 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=3782903 00:20:10.963 13:59:05 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 3782903 /var/tmp/bdevperf.sock 00:20:10.963 13:59:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3782903 ']' 00:20:10.963 13:59:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:10.963 13:59:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:10.963 13:59:05 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:10.963 13:59:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:10.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:10.963 13:59:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:10.963 13:59:05 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:20:10.963 "subsystems": [ 00:20:10.963 { 00:20:10.963 "subsystem": "keyring", 00:20:10.963 "config": [ 00:20:10.963 { 00:20:10.963 "method": "keyring_file_add_key", 00:20:10.963 "params": { 00:20:10.963 "name": "key0", 00:20:10.963 "path": "/tmp/tmp.99HwghmHBL" 00:20:10.963 } 00:20:10.963 } 00:20:10.963 ] 00:20:10.963 }, 00:20:10.963 { 00:20:10.963 "subsystem": "iobuf", 00:20:10.963 "config": [ 00:20:10.963 { 00:20:10.963 "method": "iobuf_set_options", 00:20:10.963 "params": { 00:20:10.963 "small_pool_count": 8192, 00:20:10.963 "large_pool_count": 1024, 00:20:10.963 "small_bufsize": 8192, 00:20:10.963 "large_bufsize": 135168 00:20:10.963 } 00:20:10.963 } 00:20:10.963 ] 00:20:10.963 }, 00:20:10.963 { 00:20:10.963 "subsystem": "sock", 00:20:10.963 "config": [ 00:20:10.963 { 00:20:10.963 "method": "sock_set_default_impl", 00:20:10.963 "params": { 00:20:10.963 "impl_name": "posix" 00:20:10.963 } 00:20:10.963 }, 00:20:10.963 { 00:20:10.963 "method": "sock_impl_set_options", 00:20:10.963 "params": { 00:20:10.963 "impl_name": "ssl", 00:20:10.963 "recv_buf_size": 4096, 00:20:10.963 "send_buf_size": 4096, 00:20:10.963 "enable_recv_pipe": true, 00:20:10.963 "enable_quickack": false, 00:20:10.963 "enable_placement_id": 0, 00:20:10.963 "enable_zerocopy_send_server": true, 00:20:10.963 "enable_zerocopy_send_client": false, 00:20:10.963 "zerocopy_threshold": 0, 00:20:10.963 "tls_version": 0, 00:20:10.963 "enable_ktls": false 00:20:10.963 } 00:20:10.963 }, 00:20:10.963 { 00:20:10.963 "method": "sock_impl_set_options", 00:20:10.963 "params": { 00:20:10.963 "impl_name": "posix", 00:20:10.963 "recv_buf_size": 2097152, 00:20:10.963 "send_buf_size": 2097152, 00:20:10.963 "enable_recv_pipe": true, 00:20:10.963 "enable_quickack": false, 00:20:10.963 "enable_placement_id": 0, 00:20:10.963 "enable_zerocopy_send_server": true, 00:20:10.963 "enable_zerocopy_send_client": false, 00:20:10.963 "zerocopy_threshold": 0, 00:20:10.963 "tls_version": 0, 00:20:10.963 "enable_ktls": false 00:20:10.963 } 00:20:10.963 } 00:20:10.963 ] 00:20:10.963 }, 00:20:10.963 { 00:20:10.963 "subsystem": "vmd", 00:20:10.963 "config": [] 00:20:10.963 }, 00:20:10.963 { 00:20:10.963 "subsystem": "accel", 00:20:10.963 "config": [ 00:20:10.963 { 00:20:10.963 "method": "accel_set_options", 00:20:10.963 "params": { 00:20:10.963 "small_cache_size": 128, 00:20:10.963 "large_cache_size": 16, 00:20:10.963 "task_count": 2048, 00:20:10.963 "sequence_count": 2048, 00:20:10.963 "buf_count": 2048 00:20:10.963 } 00:20:10.963 } 00:20:10.963 ] 00:20:10.963 }, 00:20:10.963 { 00:20:10.963 "subsystem": "bdev", 00:20:10.963 "config": [ 00:20:10.963 { 00:20:10.963 "method": "bdev_set_options", 00:20:10.963 "params": { 00:20:10.963 "bdev_io_pool_size": 65535, 00:20:10.963 "bdev_io_cache_size": 256, 00:20:10.963 "bdev_auto_examine": true, 00:20:10.963 "iobuf_small_cache_size": 128, 00:20:10.963 "iobuf_large_cache_size": 16 00:20:10.963 } 00:20:10.963 }, 00:20:10.963 { 00:20:10.963 "method": "bdev_raid_set_options", 00:20:10.963 "params": { 00:20:10.963 "process_window_size_kb": 1024 00:20:10.963 } 00:20:10.963 }, 00:20:10.963 { 00:20:10.963 "method": "bdev_iscsi_set_options", 00:20:10.963 "params": { 00:20:10.963 "timeout_sec": 30 00:20:10.963 } 00:20:10.963 }, 00:20:10.963 { 00:20:10.963 "method": "bdev_nvme_set_options", 00:20:10.963 "params": { 00:20:10.963 "action_on_timeout": "none", 
00:20:10.963 "timeout_us": 0, 00:20:10.963 "timeout_admin_us": 0, 00:20:10.963 "keep_alive_timeout_ms": 10000, 00:20:10.963 "arbitration_burst": 0, 00:20:10.963 "low_priority_weight": 0, 00:20:10.963 "medium_priority_weight": 0, 00:20:10.963 "high_priority_weight": 0, 00:20:10.963 "nvme_adminq_poll_period_us": 10000, 00:20:10.963 "nvme_ioq_poll_period_us": 0, 00:20:10.963 "io_queue_requests": 512, 00:20:10.963 "delay_cmd_submit": true, 00:20:10.963 "transport_retry_count": 4, 00:20:10.963 "bdev_retry_count": 3, 00:20:10.963 "transport_ack_timeout": 0, 00:20:10.963 "ctrlr_loss_timeout_sec": 0, 00:20:10.963 "reconnect_delay_sec": 0, 00:20:10.963 "fast_io_fail_timeout_sec": 0, 00:20:10.963 "disable_auto_failback": false, 00:20:10.963 "generate_uuids": false, 00:20:10.963 "transport_tos": 0, 00:20:10.963 "nvme_error_stat": false, 00:20:10.963 "rdma_srq_size": 0, 00:20:10.963 "io_path_stat": false, 00:20:10.963 "allow_accel_sequence": false, 00:20:10.963 "rdma_max_cq_size": 0, 00:20:10.963 "rdma_cm_event_timeout_ms": 0, 00:20:10.963 "dhchap_digests": [ 00:20:10.963 "sha256", 00:20:10.963 "sha384", 00:20:10.963 "sha512" 00:20:10.963 ], 00:20:10.963 "dhchap_dhgroups": [ 00:20:10.963 "null", 00:20:10.963 "ffdhe2048", 00:20:10.963 "ffdhe3072", 00:20:10.963 "ffdhe4096", 00:20:10.963 "ffdhe6144", 00:20:10.963 "ffdhe8192" 00:20:10.963 ] 00:20:10.963 } 00:20:10.963 }, 00:20:10.963 { 00:20:10.963 "method": "bdev_nvme_attach_controller", 00:20:10.963 "params": { 00:20:10.963 "name": "nvme0", 00:20:10.963 "trtype": "TCP", 00:20:10.963 "adrfam": "IPv4", 00:20:10.963 "traddr": "10.0.0.2", 00:20:10.963 "trsvcid": "4420", 00:20:10.963 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:10.963 "prchk_reftag": false, 00:20:10.963 "prchk_guard": false, 00:20:10.963 "ctrlr_loss_timeout_sec": 0, 00:20:10.963 "reconnect_delay_sec": 0, 00:20:10.963 "fast_io_fail_timeout_sec": 0, 00:20:10.963 "psk": "key0", 00:20:10.963 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:10.963 "hdgst": false, 00:20:10.963 "ddgst": false 00:20:10.963 } 00:20:10.963 }, 00:20:10.963 { 00:20:10.963 "method": "bdev_nvme_set_hotplug", 00:20:10.963 "params": { 00:20:10.963 "period_us": 100000, 00:20:10.963 "enable": false 00:20:10.963 } 00:20:10.963 }, 00:20:10.963 { 00:20:10.963 "method": "bdev_enable_histogram", 00:20:10.963 "params": { 00:20:10.963 "name": "nvme0n1", 00:20:10.963 "enable": true 00:20:10.963 } 00:20:10.963 }, 00:20:10.963 { 00:20:10.963 "method": "bdev_wait_for_examine" 00:20:10.963 } 00:20:10.963 ] 00:20:10.963 }, 00:20:10.963 { 00:20:10.963 "subsystem": "nbd", 00:20:10.963 "config": [] 00:20:10.963 } 00:20:10.963 ] 00:20:10.963 }' 00:20:10.963 13:59:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:10.963 [2024-07-15 13:59:05.684472] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
00:20:10.963 [2024-07-15 13:59:05.684559] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3782903 ] 00:20:10.963 EAL: No free 2048 kB hugepages reported on node 1 00:20:10.963 [2024-07-15 13:59:05.747196] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.221 [2024-07-15 13:59:05.864130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:11.221 [2024-07-15 13:59:06.046145] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:12.155 13:59:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:12.155 13:59:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:12.155 13:59:06 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:20:12.155 13:59:06 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:12.155 13:59:06 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.155 13:59:06 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:12.418 Running I/O for 1 seconds... 00:20:13.349 00:20:13.349 Latency(us) 00:20:13.349 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:13.349 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:13.349 Verification LBA range: start 0x0 length 0x2000 00:20:13.349 nvme0n1 : 1.02 3539.21 13.83 0.00 0.00 35837.09 7524.50 49516.09 00:20:13.349 =================================================================================================================== 00:20:13.349 Total : 3539.21 13.83 0.00 0.00 35837.09 7524.50 49516.09 00:20:13.349 0 00:20:13.349 13:59:08 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:20:13.349 13:59:08 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:20:13.349 13:59:08 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:13.349 13:59:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:20:13.349 13:59:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:20:13.349 13:59:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:20:13.349 13:59:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:13.349 13:59:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:20:13.349 13:59:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:20:13.349 13:59:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:20:13.349 13:59:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:13.349 nvmf_trace.0 00:20:13.349 13:59:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:20:13.349 13:59:08 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 3782903 00:20:13.349 13:59:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3782903 ']' 00:20:13.349 13:59:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- 
# kill -0 3782903 00:20:13.349 13:59:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:13.350 13:59:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:13.350 13:59:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3782903 00:20:13.350 13:59:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:13.350 13:59:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:13.609 13:59:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3782903' 00:20:13.609 killing process with pid 3782903 00:20:13.609 13:59:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3782903 00:20:13.609 Received shutdown signal, test time was about 1.000000 seconds 00:20:13.609 00:20:13.609 Latency(us) 00:20:13.609 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:13.609 =================================================================================================================== 00:20:13.609 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:13.609 13:59:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3782903 00:20:13.869 13:59:08 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:13.869 13:59:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:13.869 13:59:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:20:13.869 13:59:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:13.869 13:59:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:20:13.869 13:59:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:13.869 13:59:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:13.869 rmmod nvme_tcp 00:20:13.869 rmmod nvme_fabrics 00:20:13.869 rmmod nvme_keyring 00:20:13.869 13:59:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:13.869 13:59:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:20:13.869 13:59:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:20:13.869 13:59:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 3782754 ']' 00:20:13.869 13:59:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 3782754 00:20:13.869 13:59:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3782754 ']' 00:20:13.869 13:59:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3782754 00:20:13.869 13:59:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:13.869 13:59:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:13.869 13:59:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3782754 00:20:13.869 13:59:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:13.869 13:59:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:13.869 13:59:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3782754' 00:20:13.869 killing process with pid 3782754 00:20:13.869 13:59:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3782754 00:20:13.869 13:59:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3782754 00:20:14.130 13:59:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:14.130 13:59:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:14.130 13:59:08 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:14.130 13:59:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:14.130 13:59:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:14.130 13:59:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:14.130 13:59:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:14.130 13:59:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:16.034 13:59:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:16.034 13:59:10 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.Ca5xJnJ5bC /tmp/tmp.iDIJMhKbAr /tmp/tmp.99HwghmHBL 00:20:16.034 00:20:16.034 real 1m20.164s 00:20:16.034 user 2m5.931s 00:20:16.034 sys 0m30.275s 00:20:16.034 13:59:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:16.034 13:59:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.034 ************************************ 00:20:16.034 END TEST nvmf_tls 00:20:16.034 ************************************ 00:20:16.292 13:59:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:16.292 13:59:10 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:16.292 13:59:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:16.292 13:59:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:16.292 13:59:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:16.292 ************************************ 00:20:16.292 START TEST nvmf_fips 00:20:16.292 ************************************ 00:20:16.292 13:59:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:16.292 * Looking for test storage... 
00:20:16.292 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:16.292 13:59:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:16.292 13:59:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:16.292 13:59:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:16.292 13:59:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:16.292 13:59:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:16.292 13:59:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:16.292 13:59:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:16.292 13:59:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:16.292 13:59:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:16.292 13:59:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:16.292 13:59:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:16.292 13:59:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:16.292 13:59:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:16.292 13:59:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:20:16.292 13:59:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:16.292 13:59:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:16.292 13:59:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:16.292 13:59:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:16.292 13:59:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:16.292 13:59:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:16.292 13:59:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:16.292 13:59:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:16.292 13:59:10 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.292 13:59:10 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.292 13:59:10 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.292 13:59:10 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:16.292 13:59:10 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.292 13:59:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:20:16.292 13:59:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:16.292 13:59:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:16.292 13:59:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:16.292 13:59:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:16.292 13:59:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:16.292 13:59:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:16.292 13:59:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:16.292 13:59:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:16.292 13:59:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:16.292 13:59:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:20:16.292 13:59:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:20:16.292 13:59:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:20:16.293 13:59:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:20:16.293 13:59:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:20:16.293 13:59:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:20:16.293 13:59:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:20:16.293 13:59:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:20:16.293 13:59:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:20:16.293 13:59:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:20:16.293 13:59:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:20:16.293 13:59:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:20:16.293 13:59:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:20:16.293 13:59:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:20:16.293 13:59:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:20:16.293 13:59:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:20:16.293 13:59:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:20:16.293 13:59:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:16.293 13:59:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:20:16.293 13:59:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:16.293 13:59:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:20:16.293 13:59:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:16.293 13:59:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:16.293 13:59:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:16.293 13:59:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:20:16.293 13:59:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:20:16.293 13:59:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:16.293 13:59:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:16.293 13:59:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:16.293 13:59:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:20:16.293 13:59:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:16.293 13:59:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:16.293 13:59:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:16.293 13:59:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:16.293 13:59:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:20:16.293 13:59:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:16.293 13:59:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:16.293 13:59:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:16.293 13:59:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:20:16.293 13:59:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:16.293 13:59:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:16.293 13:59:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:16.293 13:59:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:16.293 13:59:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:16.293 13:59:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:16.293 13:59:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:16.293 13:59:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:16.293 13:59:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:16.293 13:59:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:20:16.293 13:59:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:20:16.293 13:59:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:20:16.293 13:59:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:20:16.293 13:59:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:20:16.293 13:59:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:16.293 13:59:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:16.293 13:59:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:16.293 13:59:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:16.293 13:59:11 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:16.293 13:59:11 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:16.293 13:59:11 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:20:16.293 13:59:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:20:16.293 13:59:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:20:16.293 13:59:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:20:16.293 13:59:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:16.293 13:59:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:16.293 13:59:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:20:16.293 13:59:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:20:16.293 13:59:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:20:16.293 13:59:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:20:16.293 13:59:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:20:16.293 13:59:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:20:16.293 13:59:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:16.293 13:59:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:20:16.293 13:59:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:20:16.293 13:59:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:20:16.293 13:59:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:20:16.293 13:59:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:20:16.293 13:59:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:20:16.293 13:59:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:16.293 13:59:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:20:16.293 13:59:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:20:16.293 13:59:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:20:16.293 13:59:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:16.293 13:59:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:20:16.293 13:59:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:16.293 13:59:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:20:16.293 13:59:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:16.293 13:59:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:20:16.293 13:59:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:16.293 13:59:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:20:16.293 13:59:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:20:16.293 13:59:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:20:16.293 Error setting digest 00:20:16.293 00B2C2A4237F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:20:16.293 00B2C2A4237F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:20:16.293 13:59:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:20:16.293 13:59:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:16.293 13:59:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:16.293 13:59:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:16.293 13:59:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:20:16.293 13:59:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:16.293 13:59:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:16.293 13:59:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:16.293 13:59:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:16.293 13:59:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:16.293 13:59:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:16.293 13:59:11 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:16.293 13:59:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:16.293 13:59:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:16.293 13:59:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:16.293 13:59:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:20:16.293 13:59:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:18.199 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:18.199 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:20:18.199 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:18.199 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:18.199 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:18.199 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:18.199 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:18.199 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:20:18.199 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:18.199 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:20:18.199 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:20:18.199 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:20:18.199 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:20:18.199 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:20:18.199 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:20:18.199 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:18.199 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:18.199 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:18.199 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:18.199 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:18.199 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:18.199 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:18.199 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:18.199 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:18.199 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:18.199 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:18.199 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:18.199 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:18.199 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:18.199 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:18.199 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:18.199 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:18.199 
13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:18.199 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:20:18.199 Found 0000:84:00.0 (0x8086 - 0x159b) 00:20:18.199 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:18.199 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:18.199 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:18.199 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:18.199 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:18.199 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:18.199 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:20:18.199 Found 0000:84:00.1 (0x8086 - 0x159b) 00:20:18.199 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:18.199 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:18.199 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:18.199 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:18.199 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:18.199 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:18.199 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:18.200 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:18.200 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:18.200 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:18.200 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:18.200 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:18.200 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:18.200 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:18.200 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:18.200 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:20:18.200 Found net devices under 0000:84:00.0: cvl_0_0 00:20:18.200 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:18.200 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:18.200 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:18.200 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:18.200 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:18.200 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:18.200 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:18.200 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:18.458 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:20:18.458 Found net devices under 0000:84:00.1: cvl_0_1 00:20:18.458 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:20:18.458 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:18.458 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:20:18.458 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:18.458 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:18.458 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:18.458 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:18.458 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:18.458 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:18.458 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:18.458 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:18.458 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:18.458 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:18.458 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:18.458 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:18.458 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:18.458 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:18.458 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:18.458 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:18.458 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:18.458 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:18.458 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:18.458 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:18.458 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:18.458 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:18.458 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:18.458 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:18.458 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.113 ms 00:20:18.458 00:20:18.458 --- 10.0.0.2 ping statistics --- 00:20:18.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.458 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:20:18.458 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:18.458 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:18.458 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:20:18.458 00:20:18.458 --- 10.0.0.1 ping statistics --- 00:20:18.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.458 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:20:18.458 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:18.458 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:20:18.458 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:18.458 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:18.458 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:18.458 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:18.458 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:18.458 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:18.458 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:18.458 13:59:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:20:18.458 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:18.459 13:59:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:18.459 13:59:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:18.459 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=3785272 00:20:18.459 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:18.459 13:59:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 3785272 00:20:18.459 13:59:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 3785272 ']' 00:20:18.459 13:59:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:18.459 13:59:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:18.459 13:59:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:18.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:18.459 13:59:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:18.459 13:59:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:18.459 [2024-07-15 13:59:13.270974] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:20:18.459 [2024-07-15 13:59:13.271060] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:18.717 EAL: No free 2048 kB hugepages reported on node 1 00:20:18.717 [2024-07-15 13:59:13.331621] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.717 [2024-07-15 13:59:13.432195] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:18.717 [2024-07-15 13:59:13.432254] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:18.717 [2024-07-15 13:59:13.432281] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:18.717 [2024-07-15 13:59:13.432292] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:18.717 [2024-07-15 13:59:13.432301] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:18.717 [2024-07-15 13:59:13.432327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:19.651 13:59:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:19.651 13:59:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:20:19.651 13:59:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:19.651 13:59:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:19.651 13:59:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:19.651 13:59:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:19.651 13:59:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:20:19.651 13:59:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:19.651 13:59:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:19.651 13:59:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:19.651 13:59:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:19.651 13:59:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:19.651 13:59:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:19.651 13:59:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:19.651 [2024-07-15 13:59:14.466881] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:19.651 [2024-07-15 13:59:14.482870] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:19.651 [2024-07-15 13:59:14.483065] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:19.909 [2024-07-15 13:59:14.513561] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:19.910 malloc0 00:20:19.910 13:59:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:19.910 13:59:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=3785429 00:20:19.910 13:59:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:19.910 13:59:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 3785429 /var/tmp/bdevperf.sock 00:20:19.910 13:59:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 3785429 ']' 00:20:19.910 13:59:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:19.910 13:59:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- 
# local max_retries=100 00:20:19.910 13:59:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:19.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:19.910 13:59:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:19.910 13:59:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:19.910 [2024-07-15 13:59:14.598187] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:20:19.910 [2024-07-15 13:59:14.598263] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3785429 ] 00:20:19.910 EAL: No free 2048 kB hugepages reported on node 1 00:20:19.910 [2024-07-15 13:59:14.656766] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.168 [2024-07-15 13:59:14.766238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:20.736 13:59:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:20.736 13:59:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:20:20.736 13:59:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:21.302 [2024-07-15 13:59:15.844195] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:21.302 [2024-07-15 13:59:15.844329] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:21.302 TLSTESTn1 00:20:21.302 13:59:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:21.302 Running I/O for 10 seconds... 
00:20:31.284 00:20:31.284 Latency(us) 00:20:31.284 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:31.284 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:31.284 Verification LBA range: start 0x0 length 0x2000 00:20:31.284 TLSTESTn1 : 10.03 2877.34 11.24 0.00 0.00 44410.26 5873.97 59807.67 00:20:31.284 =================================================================================================================== 00:20:31.284 Total : 2877.34 11.24 0.00 0.00 44410.26 5873.97 59807.67 00:20:31.284 0 00:20:31.284 13:59:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:31.284 13:59:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:31.284 13:59:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:20:31.284 13:59:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:20:31.284 13:59:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:20:31.284 13:59:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:31.284 13:59:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:20:31.284 13:59:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:20:31.284 13:59:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:20:31.285 13:59:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:31.285 nvmf_trace.0 00:20:31.543 13:59:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:20:31.543 13:59:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3785429 00:20:31.543 13:59:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 3785429 ']' 00:20:31.543 13:59:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 3785429 00:20:31.543 13:59:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:20:31.544 13:59:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:31.544 13:59:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3785429 00:20:31.544 13:59:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:31.544 13:59:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:31.544 13:59:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3785429' 00:20:31.544 killing process with pid 3785429 00:20:31.544 13:59:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 3785429 00:20:31.544 Received shutdown signal, test time was about 10.000000 seconds 00:20:31.544 00:20:31.544 Latency(us) 00:20:31.544 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:31.544 =================================================================================================================== 00:20:31.544 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:31.544 [2024-07-15 13:59:26.207547] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:31.544 13:59:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 3785429 00:20:31.802 13:59:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:31.802 13:59:26 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:20:31.802 13:59:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:20:31.802 13:59:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:31.802 13:59:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:20:31.802 13:59:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:31.802 13:59:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:31.802 rmmod nvme_tcp 00:20:31.802 rmmod nvme_fabrics 00:20:31.802 rmmod nvme_keyring 00:20:31.802 13:59:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:31.802 13:59:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:20:31.802 13:59:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:20:31.802 13:59:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 3785272 ']' 00:20:31.802 13:59:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 3785272 00:20:31.802 13:59:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 3785272 ']' 00:20:31.802 13:59:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 3785272 00:20:31.802 13:59:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:20:31.802 13:59:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:31.802 13:59:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3785272 00:20:31.802 13:59:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:31.802 13:59:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:31.802 13:59:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3785272' 00:20:31.802 killing process with pid 3785272 00:20:31.802 13:59:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 3785272 00:20:31.802 [2024-07-15 13:59:26.555448] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:31.802 13:59:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 3785272 00:20:32.059 13:59:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:32.059 13:59:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:32.059 13:59:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:32.059 13:59:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:32.059 13:59:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:32.059 13:59:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:32.059 13:59:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:32.059 13:59:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:34.597 13:59:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:34.597 13:59:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:34.597 00:20:34.597 real 0m17.972s 00:20:34.597 user 0m21.572s 00:20:34.597 sys 0m8.041s 00:20:34.597 13:59:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:34.597 13:59:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:34.597 ************************************ 00:20:34.597 END TEST nvmf_fips 
00:20:34.597 ************************************ 00:20:34.597 13:59:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:34.597 13:59:28 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:20:34.597 13:59:28 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:20:34.597 13:59:28 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:20:34.597 13:59:28 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:20:34.597 13:59:28 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:20:34.597 13:59:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:20:36.502 Found 0000:84:00.0 (0x8086 - 0x159b) 00:20:36.502 13:59:30 nvmf_tcp -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:20:36.502 Found 0000:84:00.1 (0x8086 - 0x159b) 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:20:36.502 Found net devices under 0000:84:00.0: cvl_0_0 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:20:36.502 Found net devices under 0000:84:00.1: cvl_0_1 00:20:36.502 13:59:30 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:36.502 13:59:31 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:36.502 13:59:31 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:36.502 13:59:31 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:20:36.502 13:59:31 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:36.502 13:59:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:36.502 13:59:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:20:36.502 13:59:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:36.502 ************************************ 00:20:36.502 START TEST nvmf_perf_adq 00:20:36.502 ************************************ 00:20:36.502 13:59:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:36.502 * Looking for test storage... 00:20:36.502 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:36.502 13:59:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:36.502 13:59:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:20:36.502 13:59:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:36.502 13:59:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:36.502 13:59:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:36.502 13:59:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:36.502 13:59:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:36.502 13:59:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:36.502 13:59:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:36.502 13:59:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:36.502 13:59:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:36.502 13:59:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:36.502 13:59:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:36.502 13:59:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:20:36.502 13:59:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:36.502 13:59:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:36.502 13:59:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:36.502 13:59:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:36.502 13:59:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:36.502 13:59:31 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:36.502 13:59:31 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:36.502 13:59:31 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:36.502 13:59:31 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.502 13:59:31 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.502 13:59:31 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.502 13:59:31 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:20:36.503 13:59:31 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.503 13:59:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:20:36.503 13:59:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:36.503 13:59:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:36.503 13:59:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:36.503 13:59:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:36.503 13:59:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:36.503 13:59:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:36.503 13:59:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:36.503 13:59:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:36.503 13:59:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:20:36.503 13:59:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:36.503 13:59:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:20:38.402 Found 0000:84:00.0 (0x8086 - 0x159b) 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:20:38.402 Found 0000:84:00.1 (0x8086 - 0x159b) 
00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:20:38.402 Found net devices under 0000:84:00.0: cvl_0_0 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:20:38.402 Found net devices under 0000:84:00.1: cvl_0_1 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:20:38.402 13:59:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:20:39.338 13:59:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:20:41.304 13:59:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:20:46.579 13:59:40 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:20:46.579 Found 0000:84:00.0 (0x8086 - 0x159b) 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:20:46.579 Found 0000:84:00.1 (0x8086 - 0x159b) 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:20:46.579 Found net devices under 0000:84:00.0: cvl_0_0 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:20:46.579 Found net devices under 0000:84:00.1: cvl_0_1 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:46.579 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:20:46.580 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:46.580 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:46.580 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:46.580 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:46.580 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:46.580 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:46.580 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:46.580 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:46.580 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:46.580 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:46.580 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:46.580 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:46.580 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:46.580 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:46.580 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:46.580 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:46.580 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:46.580 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:46.580 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:46.580 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:46.580 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:46.580 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:46.580 13:59:40 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:46.580 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:46.580 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:20:46.580 00:20:46.580 --- 10.0.0.2 ping statistics --- 00:20:46.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.580 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:20:46.580 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:46.580 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:46.580 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:20:46.580 00:20:46.580 --- 10.0.0.1 ping statistics --- 00:20:46.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.580 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:20:46.580 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:46.580 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:20:46.580 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:46.580 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:46.580 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:46.580 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:46.580 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:46.580 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:46.580 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:46.580 13:59:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:46.580 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:46.580 13:59:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:46.580 13:59:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:46.580 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3791339 00:20:46.580 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:46.580 13:59:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3791339 00:20:46.580 13:59:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 3791339 ']' 00:20:46.580 13:59:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:46.580 13:59:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:46.580 13:59:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:46.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:46.580 13:59:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:46.580 13:59:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:46.580 [2024-07-15 13:59:40.997668] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
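Note on the nvmf_tcp_init block traced above: the helper isolates the target-side E810 port in its own network namespace so the target (10.0.0.2) and the initiator (10.0.0.1) exchange NVMe/TCP traffic over real hardware on a single host. A minimal standalone sketch of that pattern follows; the names tgt_ns, eth_tgt and eth_ini are placeholders (the traced run uses cvl_0_0_ns_spdk, cvl_0_0 and cvl_0_1), and this is an illustration of the idea rather than the actual nvmf/common.sh code.

#!/usr/bin/env bash
# Sketch: put the target-side NIC in its own netns so target and initiator
# traffic really crosses the wire between the two connected ports.
set -e
NS=tgt_ns          # assumed namespace name
TGT_IF=eth_tgt     # assumed target-side port
INI_IF=eth_ini     # assumed initiator-side port

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP (port 4420) in, then verify reachability both ways.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1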
00:20:46.580 [2024-07-15 13:59:40.997772] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:46.580 EAL: No free 2048 kB hugepages reported on node 1 00:20:46.580 [2024-07-15 13:59:41.062932] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:46.580 [2024-07-15 13:59:41.175088] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:46.580 [2024-07-15 13:59:41.175140] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:46.580 [2024-07-15 13:59:41.175155] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:46.580 [2024-07-15 13:59:41.175168] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:46.580 [2024-07-15 13:59:41.175178] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:46.580 [2024-07-15 13:59:41.175283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:46.580 [2024-07-15 13:59:41.175372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:46.580 [2024-07-15 13:59:41.175483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:46.580 [2024-07-15 13:59:41.175488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:46.580 13:59:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:46.580 13:59:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:20:46.580 13:59:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:46.580 13:59:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:46.580 13:59:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:46.580 13:59:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:46.580 13:59:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:20:46.580 13:59:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:46.580 13:59:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:46.580 13:59:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.580 13:59:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:46.580 13:59:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.580 13:59:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:46.580 13:59:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:20:46.580 13:59:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.580 13:59:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:46.580 13:59:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.580 13:59:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:46.580 13:59:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.580 13:59:41 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:20:46.580 13:59:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.580 13:59:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:20:46.580 13:59:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.580 13:59:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:46.580 [2024-07-15 13:59:41.403392] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:46.580 13:59:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.580 13:59:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:46.580 13:59:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.580 13:59:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:46.838 Malloc1 00:20:46.838 13:59:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.838 13:59:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:46.838 13:59:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.838 13:59:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:46.838 13:59:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.838 13:59:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:46.839 13:59:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.839 13:59:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:46.839 13:59:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.839 13:59:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:46.839 13:59:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.839 13:59:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:46.839 [2024-07-15 13:59:41.454644] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:46.839 13:59:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.839 13:59:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=3791494 00:20:46.839 13:59:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:20:46.839 13:59:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:46.839 EAL: No free 2048 kB hugepages reported on node 1 00:20:48.739 13:59:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:20:48.739 13:59:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.739 13:59:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:48.739 13:59:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.739 13:59:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:20:48.739 
"tick_rate": 2700000000, 00:20:48.739 "poll_groups": [ 00:20:48.739 { 00:20:48.739 "name": "nvmf_tgt_poll_group_000", 00:20:48.739 "admin_qpairs": 1, 00:20:48.739 "io_qpairs": 1, 00:20:48.739 "current_admin_qpairs": 1, 00:20:48.739 "current_io_qpairs": 1, 00:20:48.739 "pending_bdev_io": 0, 00:20:48.739 "completed_nvme_io": 20505, 00:20:48.739 "transports": [ 00:20:48.739 { 00:20:48.739 "trtype": "TCP" 00:20:48.739 } 00:20:48.739 ] 00:20:48.739 }, 00:20:48.739 { 00:20:48.739 "name": "nvmf_tgt_poll_group_001", 00:20:48.739 "admin_qpairs": 0, 00:20:48.739 "io_qpairs": 1, 00:20:48.739 "current_admin_qpairs": 0, 00:20:48.739 "current_io_qpairs": 1, 00:20:48.739 "pending_bdev_io": 0, 00:20:48.739 "completed_nvme_io": 20869, 00:20:48.739 "transports": [ 00:20:48.739 { 00:20:48.739 "trtype": "TCP" 00:20:48.739 } 00:20:48.739 ] 00:20:48.739 }, 00:20:48.739 { 00:20:48.739 "name": "nvmf_tgt_poll_group_002", 00:20:48.739 "admin_qpairs": 0, 00:20:48.739 "io_qpairs": 1, 00:20:48.739 "current_admin_qpairs": 0, 00:20:48.739 "current_io_qpairs": 1, 00:20:48.739 "pending_bdev_io": 0, 00:20:48.739 "completed_nvme_io": 21007, 00:20:48.739 "transports": [ 00:20:48.739 { 00:20:48.739 "trtype": "TCP" 00:20:48.739 } 00:20:48.739 ] 00:20:48.739 }, 00:20:48.739 { 00:20:48.739 "name": "nvmf_tgt_poll_group_003", 00:20:48.739 "admin_qpairs": 0, 00:20:48.739 "io_qpairs": 1, 00:20:48.739 "current_admin_qpairs": 0, 00:20:48.739 "current_io_qpairs": 1, 00:20:48.739 "pending_bdev_io": 0, 00:20:48.739 "completed_nvme_io": 20742, 00:20:48.739 "transports": [ 00:20:48.739 { 00:20:48.739 "trtype": "TCP" 00:20:48.739 } 00:20:48.739 ] 00:20:48.739 } 00:20:48.739 ] 00:20:48.739 }' 00:20:48.739 13:59:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:20:48.739 13:59:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:20:48.739 13:59:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:20:48.739 13:59:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:20:48.739 13:59:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 3791494 00:20:56.851 Initializing NVMe Controllers 00:20:56.851 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:56.851 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:56.851 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:56.851 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:56.851 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:56.851 Initialization complete. Launching workers. 
00:20:56.851 ======================================================== 00:20:56.851 Latency(us) 00:20:56.851 Device Information : IOPS MiB/s Average min max 00:20:56.851 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10564.70 41.27 6058.22 2863.87 8442.93 00:20:56.851 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10701.50 41.80 5982.44 2496.30 8664.07 00:20:56.851 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10853.40 42.40 5896.46 2457.19 8470.80 00:20:56.851 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10584.70 41.35 6047.78 2837.97 9776.14 00:20:56.851 ======================================================== 00:20:56.851 Total : 42704.30 166.81 5995.53 2457.19 9776.14 00:20:56.851 00:20:56.851 13:59:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:20:56.851 13:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:56.851 13:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:20:56.851 13:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:56.851 13:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:20:56.851 13:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:56.851 13:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:56.851 rmmod nvme_tcp 00:20:56.851 rmmod nvme_fabrics 00:20:56.851 rmmod nvme_keyring 00:20:56.851 13:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:56.851 13:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:20:56.851 13:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:20:56.851 13:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3791339 ']' 00:20:56.851 13:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3791339 00:20:56.851 13:59:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 3791339 ']' 00:20:56.851 13:59:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 3791339 00:20:56.851 13:59:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:20:56.851 13:59:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:56.851 13:59:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3791339 00:20:56.851 13:59:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:56.851 13:59:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:56.851 13:59:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3791339' 00:20:56.851 killing process with pid 3791339 00:20:56.851 13:59:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 3791339 00:20:56.851 13:59:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 3791339 00:20:57.109 13:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:57.109 13:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:57.109 13:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:57.109 13:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:57.109 13:59:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:57.109 13:59:51 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:57.109 13:59:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:57.109 13:59:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:59.643 13:59:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:59.643 13:59:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:20:59.643 13:59:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:20:59.901 13:59:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:21:02.429 13:59:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:21:07.699 14:00:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:21:07.699 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:07.700 14:00:01 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:21:07.700 Found 0000:84:00.0 (0x8086 - 0x159b) 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:21:07.700 Found 0000:84:00.1 (0x8086 - 0x159b) 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
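The detection loop above (nvmf/common.sh@382-401) resolves each E810 PCI function (8086:159b) to its kernel net device by globbing /sys/bus/pci/devices/<bdf>/net/ and keeping only interfaces that are up. A minimal sketch of that mapping; the two BDF addresses are taken from this trace and would differ on other machines:

# Resolve PCI functions to their bound net device names via sysfs.
for pci in 0000:84:00.0 0000:84:00.1; do
    for path in /sys/bus/pci/devices/"$pci"/net/*; do
        [[ -e "$path" ]] || continue          # no netdev bound to this function
        dev=${path##*/}
        state=$(cat /sys/class/net/"$dev"/operstate 2>/dev/null)
        echo "Found net device under $pci: $dev (operstate: ${state:-unknown})"
    done
done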
00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:21:07.700 Found net devices under 0000:84:00.0: cvl_0_0 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:21:07.700 Found net devices under 0000:84:00.1: cvl_0_1 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:07.700 
14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:07.700 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:07.700 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:21:07.700 00:21:07.700 --- 10.0.0.2 ping statistics --- 00:21:07.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:07.700 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:07.700 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:07.700 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:21:07.700 00:21:07.700 --- 10.0.0.1 ping statistics --- 00:21:07.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:07.700 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:07.700 net.core.busy_poll = 1 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:21:07.700 net.core.busy_read = 1 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:21:07.700 14:00:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:07.700 14:00:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:07.700 14:00:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:07.700 14:00:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:07.700 14:00:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:07.700 14:00:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:07.700 14:00:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3794225 00:21:07.700 14:00:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:07.700 14:00:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3794225 00:21:07.700 14:00:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 3794225 ']' 00:21:07.700 14:00:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:07.700 14:00:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:07.700 14:00:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:07.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:07.700 14:00:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:07.700 14:00:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:07.700 [2024-07-15 14:00:02.088160] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:21:07.700 [2024-07-15 14:00:02.088238] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:07.700 EAL: No free 2048 kB hugepages reported on node 1 00:21:07.700 [2024-07-15 14:00:02.154762] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:07.700 [2024-07-15 14:00:02.266064] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:07.700 [2024-07-15 14:00:02.266115] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:07.700 [2024-07-15 14:00:02.266143] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:07.700 [2024-07-15 14:00:02.266154] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:07.700 [2024-07-15 14:00:02.266164] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
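adq_configure_driver above enables application device queues on the E810 port: hardware TC offload is switched on, busy polling is turned on, an mqprio qdisc splits the queues into two traffic classes, and a flower filter steers NVMe/TCP traffic (TCP dport 4420 toward 10.0.0.2) into the second class in hardware. A condensed sketch of those steps, assuming the interface lives in the cvl_0_0_ns_spdk namespace as in the trace and that the 2@0 2@2 queue split matches a four-queue configuration:

IF=cvl_0_0
NS="ip netns exec cvl_0_0_ns_spdk"

# 1. Hardware TC offload plus the ice driver flag ADQ expects.
$NS ethtool --offload "$IF" hw-tc-offload on
$NS ethtool --set-priv-flags "$IF" channel-pkt-inspect-optimize off

# 2. Busy polling so the socket layer spins on the dedicated queues.
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1

# 3. Two traffic classes: TC0 gets queues 0-1, TC1 gets queues 2-3.
$NS tc qdisc add dev "$IF" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel

# 4. Steer NVMe/TCP (10.0.0.2:4420) into TC1 in hardware.
$NS tc qdisc add dev "$IF" ingress
$NS tc filter add dev "$IF" protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1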
00:21:07.700 [2024-07-15 14:00:02.266247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:07.700 [2024-07-15 14:00:02.266312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:07.700 [2024-07-15 14:00:02.266419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:07.700 [2024-07-15 14:00:02.266427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:08.267 14:00:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:08.267 14:00:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:21:08.267 14:00:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:08.267 14:00:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:08.267 14:00:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:08.267 14:00:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:08.267 14:00:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:21:08.267 14:00:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:08.267 14:00:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.267 14:00:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:08.267 14:00:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:08.267 14:00:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.267 14:00:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:08.267 14:00:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:08.267 14:00:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.267 14:00:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:08.267 14:00:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.267 14:00:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:08.267 14:00:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.267 14:00:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:08.534 14:00:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.534 14:00:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:08.534 14:00:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.534 14:00:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:08.534 [2024-07-15 14:00:03.200878] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:08.534 14:00:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.534 14:00:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:08.534 14:00:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.534 14:00:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:08.534 Malloc1 00:21:08.534 14:00:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.534 14:00:03 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:08.534 14:00:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.534 14:00:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:08.534 14:00:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.534 14:00:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:08.534 14:00:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.534 14:00:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:08.534 14:00:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.534 14:00:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:08.534 14:00:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.534 14:00:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:08.534 [2024-07-15 14:00:03.253708] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:08.534 14:00:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.534 14:00:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=3794393 00:21:08.534 14:00:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:08.534 14:00:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:21:08.534 EAL: No free 2048 kB hugepages reported on node 1 00:21:10.436 14:00:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:21:10.436 14:00:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.436 14:00:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:10.436 14:00:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.694 14:00:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:21:10.694 "tick_rate": 2700000000, 00:21:10.694 "poll_groups": [ 00:21:10.694 { 00:21:10.694 "name": "nvmf_tgt_poll_group_000", 00:21:10.694 "admin_qpairs": 1, 00:21:10.694 "io_qpairs": 2, 00:21:10.694 "current_admin_qpairs": 1, 00:21:10.694 "current_io_qpairs": 2, 00:21:10.694 "pending_bdev_io": 0, 00:21:10.694 "completed_nvme_io": 25686, 00:21:10.694 "transports": [ 00:21:10.694 { 00:21:10.694 "trtype": "TCP" 00:21:10.694 } 00:21:10.694 ] 00:21:10.694 }, 00:21:10.694 { 00:21:10.694 "name": "nvmf_tgt_poll_group_001", 00:21:10.694 "admin_qpairs": 0, 00:21:10.694 "io_qpairs": 2, 00:21:10.694 "current_admin_qpairs": 0, 00:21:10.694 "current_io_qpairs": 2, 00:21:10.694 "pending_bdev_io": 0, 00:21:10.694 "completed_nvme_io": 26053, 00:21:10.694 "transports": [ 00:21:10.694 { 00:21:10.694 "trtype": "TCP" 00:21:10.694 } 00:21:10.694 ] 00:21:10.694 }, 00:21:10.694 { 00:21:10.694 "name": "nvmf_tgt_poll_group_002", 00:21:10.694 "admin_qpairs": 0, 00:21:10.694 "io_qpairs": 0, 00:21:10.694 "current_admin_qpairs": 0, 00:21:10.694 "current_io_qpairs": 0, 00:21:10.694 "pending_bdev_io": 0, 00:21:10.694 "completed_nvme_io": 0, 
00:21:10.694 "transports": [ 00:21:10.694 { 00:21:10.694 "trtype": "TCP" 00:21:10.694 } 00:21:10.694 ] 00:21:10.694 }, 00:21:10.694 { 00:21:10.694 "name": "nvmf_tgt_poll_group_003", 00:21:10.694 "admin_qpairs": 0, 00:21:10.694 "io_qpairs": 0, 00:21:10.694 "current_admin_qpairs": 0, 00:21:10.694 "current_io_qpairs": 0, 00:21:10.694 "pending_bdev_io": 0, 00:21:10.694 "completed_nvme_io": 0, 00:21:10.694 "transports": [ 00:21:10.694 { 00:21:10.694 "trtype": "TCP" 00:21:10.694 } 00:21:10.694 ] 00:21:10.694 } 00:21:10.694 ] 00:21:10.694 }' 00:21:10.694 14:00:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:10.694 14:00:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:21:10.694 14:00:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:21:10.694 14:00:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:21:10.694 14:00:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 3794393 00:21:18.802 Initializing NVMe Controllers 00:21:18.802 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:18.802 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:18.802 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:18.802 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:18.802 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:18.802 Initialization complete. Launching workers. 00:21:18.802 ======================================================== 00:21:18.802 Latency(us) 00:21:18.802 Device Information : IOPS MiB/s Average min max 00:21:18.802 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6453.77 25.21 9919.91 1672.19 54497.57 00:21:18.802 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7132.87 27.86 8975.64 1282.03 54127.91 00:21:18.802 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7888.47 30.81 8112.83 1618.63 53571.02 00:21:18.802 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5771.17 22.54 11131.84 1889.01 54579.39 00:21:18.802 ======================================================== 00:21:18.802 Total : 27246.28 106.43 9406.22 1282.03 54579.39 00:21:18.802 00:21:18.802 14:00:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:21:18.802 14:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:18.802 14:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:21:18.802 14:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:18.802 14:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:21:18.802 14:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:18.802 14:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:18.802 rmmod nvme_tcp 00:21:18.802 rmmod nvme_fabrics 00:21:18.802 rmmod nvme_keyring 00:21:18.802 14:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:18.802 14:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:21:18.802 14:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:21:18.802 14:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3794225 ']' 00:21:18.802 14:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 3794225 00:21:18.802 14:00:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 3794225 ']' 00:21:18.802 14:00:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 3794225 00:21:18.802 14:00:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:21:18.802 14:00:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:18.802 14:00:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3794225 00:21:18.802 14:00:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:18.802 14:00:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:18.802 14:00:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3794225' 00:21:18.802 killing process with pid 3794225 00:21:18.802 14:00:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 3794225 00:21:18.802 14:00:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 3794225 00:21:19.062 14:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:19.062 14:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:19.062 14:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:19.062 14:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:19.062 14:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:19.062 14:00:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.062 14:00:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:19.062 14:00:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:22.353 14:00:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:22.353 14:00:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:21:22.353 00:21:22.353 real 0m45.798s 00:21:22.353 user 2m42.659s 00:21:22.353 sys 0m9.983s 00:21:22.353 14:00:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:22.353 14:00:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:22.353 ************************************ 00:21:22.353 END TEST nvmf_perf_adq 00:21:22.353 ************************************ 00:21:22.353 14:00:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:22.353 14:00:16 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:22.353 14:00:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:22.353 14:00:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:22.353 14:00:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:22.353 ************************************ 00:21:22.353 START TEST nvmf_shutdown 00:21:22.353 ************************************ 00:21:22.353 14:00:16 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:22.353 * Looking for test storage... 
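Before the shutdown suite begins, nvmftestfini above unwinds the perf_adq fixture in a fixed order: unload the host-side NVMe transport modules, kill nvmf_tgt only after confirming the pid still belongs to an SPDK reactor, then flush the initiator-side address and drop the namespace. A minimal sketch of that order, assuming the target pid is available in $nvmfpid; the body of _remove_spdk_ns is not shown in the log, so the namespace deletion and the simplified reactor_0 guard here are assumptions, not the exact killprocess logic.

# Teardown sketch: unload initiator modules, stop the target, clean the netns.
nvmfpid=${nvmfpid:?target pid required}
modprobe -v -r nvme-tcp || true        # pulls nvme_fabrics/nvme_keyring out with it
modprobe -v -r nvme-fabrics || true

# Kill the target only if the pid still belongs to an SPDK reactor process.
if kill -0 "$nvmfpid" 2>/dev/null && \
   [[ "$(ps --no-headers -o comm= "$nvmfpid")" == reactor_0 ]]; then
    echo "killing process with pid $nvmfpid"
    kill "$nvmfpid"
    wait "$nvmfpid" 2>/dev/null || true
fi

# Flush the initiator-side address; namespace removal is an assumed step.
ip -4 addr flush cvl_0_1
ip netns del cvl_0_0_ns_spdk 2>/dev/null || true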
00:21:22.353 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:22.353 14:00:16 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:22.353 14:00:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:21:22.353 14:00:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:22.353 14:00:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:22.353 14:00:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:22.353 14:00:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:22.353 14:00:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:22.353 14:00:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:22.353 14:00:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:22.353 14:00:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:22.353 14:00:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:22.353 14:00:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:22.353 14:00:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:22.353 14:00:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:21:22.353 14:00:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:22.353 14:00:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:22.353 14:00:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:22.353 14:00:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:22.353 14:00:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:22.353 14:00:16 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:22.353 14:00:16 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:22.353 14:00:16 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:22.353 14:00:16 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.353 14:00:16 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.353 14:00:16 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.353 14:00:16 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:21:22.353 14:00:16 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.353 14:00:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:21:22.353 14:00:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:22.353 14:00:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:22.353 14:00:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:22.353 14:00:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:22.353 14:00:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:22.353 14:00:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:22.354 14:00:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:22.354 14:00:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:22.354 14:00:16 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:22.354 14:00:16 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:22.354 14:00:16 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:22.354 14:00:16 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:22.354 14:00:16 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:22.354 14:00:16 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:22.354 ************************************ 00:21:22.354 START TEST nvmf_shutdown_tc1 00:21:22.354 ************************************ 00:21:22.354 14:00:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:21:22.354 14:00:16 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:21:22.354 14:00:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:22.354 14:00:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:22.354 14:00:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:22.354 14:00:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:22.354 14:00:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:22.354 14:00:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:22.354 14:00:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:22.354 14:00:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:22.354 14:00:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:22.354 14:00:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:22.354 14:00:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:22.354 14:00:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:22.354 14:00:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:21:24.293 Found 0000:84:00.0 (0x8086 - 0x159b) 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:21:24.293 Found 0000:84:00.1 (0x8086 - 0x159b) 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:24.293 14:00:18 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:21:24.293 Found net devices under 0000:84:00.0: cvl_0_0 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:21:24.293 Found net devices under 0000:84:00.1: cvl_0_1 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:24.293 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:24.294 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:24.294 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:24.294 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:24.294 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:24.294 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:24.294 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:24.294 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:24.294 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:24.294 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:24.294 14:00:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:24.294 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:24.294 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:24.294 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:24.294 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:24.294 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:24.294 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:24.294 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:24.294 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:24.294 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:24.294 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:21:24.294 00:21:24.294 --- 10.0.0.2 ping statistics --- 00:21:24.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.294 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:21:24.294 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:24.294 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:24.294 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:21:24.294 00:21:24.294 --- 10.0.0.1 ping statistics --- 00:21:24.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.294 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:21:24.294 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:24.294 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:21:24.294 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:24.294 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:24.294 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:24.294 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:24.294 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:24.294 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:24.294 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:24.553 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:24.553 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:24.553 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:24.553 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:24.553 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=3798199 00:21:24.553 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:24.553 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 3798199 00:21:24.553 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 3798199 ']' 00:21:24.553 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:24.553 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:24.553 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:24.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:24.553 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:24.553 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:24.553 [2024-07-15 14:00:19.206798] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
00:21:24.553 [2024-07-15 14:00:19.206902] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:24.553 EAL: No free 2048 kB hugepages reported on node 1 00:21:24.553 [2024-07-15 14:00:19.271530] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:24.553 [2024-07-15 14:00:19.384177] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:24.553 [2024-07-15 14:00:19.384233] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:24.553 [2024-07-15 14:00:19.384262] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:24.553 [2024-07-15 14:00:19.384273] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:24.553 [2024-07-15 14:00:19.384283] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:24.553 [2024-07-15 14:00:19.384376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:24.553 [2024-07-15 14:00:19.384438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:24.553 [2024-07-15 14:00:19.384505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:24.553 [2024-07-15 14:00:19.384507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:24.812 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:24.812 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:21:24.812 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:24.812 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:24.812 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:24.812 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:24.812 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:24.812 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.812 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:24.812 [2024-07-15 14:00:19.539642] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:24.812 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.812 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:24.812 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:24.812 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:24.812 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:24.812 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:24.812 14:00:19 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:24.812 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:24.812 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:24.812 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:24.812 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:24.812 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:24.812 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:24.812 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:24.812 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:24.812 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:24.812 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:24.812 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:24.812 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:24.812 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:24.812 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:24.812 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:24.812 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:24.812 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:24.812 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:24.812 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:24.812 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:24.812 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.812 14:00:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:24.813 Malloc1 00:21:24.813 [2024-07-15 14:00:19.623347] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:24.813 Malloc2 00:21:25.072 Malloc3 00:21:25.072 Malloc4 00:21:25.072 Malloc5 00:21:25.072 Malloc6 00:21:25.072 Malloc7 00:21:25.331 Malloc8 00:21:25.331 Malloc9 00:21:25.331 Malloc10 00:21:25.331 14:00:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.331 14:00:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:25.331 14:00:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:25.331 14:00:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:25.331 14:00:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=3798380 00:21:25.332 14:00:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 3798380 
/var/tmp/bdevperf.sock 00:21:25.332 14:00:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 3798380 ']' 00:21:25.332 14:00:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:25.332 14:00:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:21:25.332 14:00:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:25.332 14:00:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:25.332 14:00:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:21:25.332 14:00:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:25.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:25.332 14:00:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:21:25.332 14:00:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:25.332 14:00:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:25.332 14:00:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:25.332 14:00:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:25.332 { 00:21:25.332 "params": { 00:21:25.332 "name": "Nvme$subsystem", 00:21:25.332 "trtype": "$TEST_TRANSPORT", 00:21:25.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:25.332 "adrfam": "ipv4", 00:21:25.332 "trsvcid": "$NVMF_PORT", 00:21:25.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:25.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:25.332 "hdgst": ${hdgst:-false}, 00:21:25.332 "ddgst": ${ddgst:-false} 00:21:25.332 }, 00:21:25.332 "method": "bdev_nvme_attach_controller" 00:21:25.332 } 00:21:25.332 EOF 00:21:25.332 )") 00:21:25.332 14:00:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:25.332 14:00:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:25.332 14:00:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:25.332 { 00:21:25.332 "params": { 00:21:25.332 "name": "Nvme$subsystem", 00:21:25.332 "trtype": "$TEST_TRANSPORT", 00:21:25.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:25.332 "adrfam": "ipv4", 00:21:25.332 "trsvcid": "$NVMF_PORT", 00:21:25.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:25.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:25.332 "hdgst": ${hdgst:-false}, 00:21:25.332 "ddgst": ${ddgst:-false} 00:21:25.332 }, 00:21:25.332 "method": "bdev_nvme_attach_controller" 00:21:25.332 } 00:21:25.332 EOF 00:21:25.332 )") 00:21:25.332 14:00:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:25.332 14:00:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:25.332 14:00:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:25.332 { 00:21:25.332 "params": { 00:21:25.332 
"name": "Nvme$subsystem", 00:21:25.332 "trtype": "$TEST_TRANSPORT", 00:21:25.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:25.332 "adrfam": "ipv4", 00:21:25.332 "trsvcid": "$NVMF_PORT", 00:21:25.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:25.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:25.332 "hdgst": ${hdgst:-false}, 00:21:25.332 "ddgst": ${ddgst:-false} 00:21:25.332 }, 00:21:25.332 "method": "bdev_nvme_attach_controller" 00:21:25.332 } 00:21:25.332 EOF 00:21:25.332 )") 00:21:25.332 14:00:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:25.332 14:00:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:25.332 14:00:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:25.332 { 00:21:25.332 "params": { 00:21:25.332 "name": "Nvme$subsystem", 00:21:25.332 "trtype": "$TEST_TRANSPORT", 00:21:25.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:25.332 "adrfam": "ipv4", 00:21:25.332 "trsvcid": "$NVMF_PORT", 00:21:25.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:25.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:25.332 "hdgst": ${hdgst:-false}, 00:21:25.332 "ddgst": ${ddgst:-false} 00:21:25.332 }, 00:21:25.332 "method": "bdev_nvme_attach_controller" 00:21:25.332 } 00:21:25.332 EOF 00:21:25.332 )") 00:21:25.332 14:00:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:25.332 14:00:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:25.332 14:00:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:25.332 { 00:21:25.332 "params": { 00:21:25.332 "name": "Nvme$subsystem", 00:21:25.332 "trtype": "$TEST_TRANSPORT", 00:21:25.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:25.332 "adrfam": "ipv4", 00:21:25.332 "trsvcid": "$NVMF_PORT", 00:21:25.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:25.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:25.332 "hdgst": ${hdgst:-false}, 00:21:25.332 "ddgst": ${ddgst:-false} 00:21:25.332 }, 00:21:25.332 "method": "bdev_nvme_attach_controller" 00:21:25.332 } 00:21:25.332 EOF 00:21:25.332 )") 00:21:25.332 14:00:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:25.332 14:00:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:25.332 14:00:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:25.332 { 00:21:25.332 "params": { 00:21:25.332 "name": "Nvme$subsystem", 00:21:25.332 "trtype": "$TEST_TRANSPORT", 00:21:25.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:25.332 "adrfam": "ipv4", 00:21:25.332 "trsvcid": "$NVMF_PORT", 00:21:25.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:25.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:25.332 "hdgst": ${hdgst:-false}, 00:21:25.332 "ddgst": ${ddgst:-false} 00:21:25.332 }, 00:21:25.332 "method": "bdev_nvme_attach_controller" 00:21:25.332 } 00:21:25.332 EOF 00:21:25.332 )") 00:21:25.332 14:00:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:25.332 14:00:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:25.332 14:00:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:25.332 { 00:21:25.332 "params": { 00:21:25.332 "name": "Nvme$subsystem", 
00:21:25.332 "trtype": "$TEST_TRANSPORT", 00:21:25.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:25.332 "adrfam": "ipv4", 00:21:25.332 "trsvcid": "$NVMF_PORT", 00:21:25.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:25.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:25.332 "hdgst": ${hdgst:-false}, 00:21:25.332 "ddgst": ${ddgst:-false} 00:21:25.332 }, 00:21:25.332 "method": "bdev_nvme_attach_controller" 00:21:25.332 } 00:21:25.332 EOF 00:21:25.332 )") 00:21:25.332 14:00:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:25.332 14:00:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:25.332 14:00:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:25.332 { 00:21:25.332 "params": { 00:21:25.332 "name": "Nvme$subsystem", 00:21:25.332 "trtype": "$TEST_TRANSPORT", 00:21:25.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:25.332 "adrfam": "ipv4", 00:21:25.332 "trsvcid": "$NVMF_PORT", 00:21:25.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:25.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:25.332 "hdgst": ${hdgst:-false}, 00:21:25.332 "ddgst": ${ddgst:-false} 00:21:25.332 }, 00:21:25.332 "method": "bdev_nvme_attach_controller" 00:21:25.332 } 00:21:25.332 EOF 00:21:25.332 )") 00:21:25.332 14:00:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:25.332 14:00:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:25.332 14:00:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:25.332 { 00:21:25.332 "params": { 00:21:25.332 "name": "Nvme$subsystem", 00:21:25.332 "trtype": "$TEST_TRANSPORT", 00:21:25.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:25.332 "adrfam": "ipv4", 00:21:25.332 "trsvcid": "$NVMF_PORT", 00:21:25.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:25.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:25.332 "hdgst": ${hdgst:-false}, 00:21:25.332 "ddgst": ${ddgst:-false} 00:21:25.332 }, 00:21:25.332 "method": "bdev_nvme_attach_controller" 00:21:25.332 } 00:21:25.332 EOF 00:21:25.332 )") 00:21:25.332 14:00:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:25.332 14:00:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:25.332 14:00:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:25.332 { 00:21:25.332 "params": { 00:21:25.332 "name": "Nvme$subsystem", 00:21:25.332 "trtype": "$TEST_TRANSPORT", 00:21:25.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:25.332 "adrfam": "ipv4", 00:21:25.332 "trsvcid": "$NVMF_PORT", 00:21:25.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:25.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:25.332 "hdgst": ${hdgst:-false}, 00:21:25.332 "ddgst": ${ddgst:-false} 00:21:25.332 }, 00:21:25.332 "method": "bdev_nvme_attach_controller" 00:21:25.332 } 00:21:25.332 EOF 00:21:25.332 )") 00:21:25.332 14:00:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:25.332 14:00:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:21:25.332 14:00:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:21:25.332 14:00:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:25.332 "params": { 00:21:25.332 "name": "Nvme1", 00:21:25.332 "trtype": "tcp", 00:21:25.332 "traddr": "10.0.0.2", 00:21:25.332 "adrfam": "ipv4", 00:21:25.332 "trsvcid": "4420", 00:21:25.332 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:25.332 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:25.332 "hdgst": false, 00:21:25.333 "ddgst": false 00:21:25.333 }, 00:21:25.333 "method": "bdev_nvme_attach_controller" 00:21:25.333 },{ 00:21:25.333 "params": { 00:21:25.333 "name": "Nvme2", 00:21:25.333 "trtype": "tcp", 00:21:25.333 "traddr": "10.0.0.2", 00:21:25.333 "adrfam": "ipv4", 00:21:25.333 "trsvcid": "4420", 00:21:25.333 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:25.333 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:25.333 "hdgst": false, 00:21:25.333 "ddgst": false 00:21:25.333 }, 00:21:25.333 "method": "bdev_nvme_attach_controller" 00:21:25.333 },{ 00:21:25.333 "params": { 00:21:25.333 "name": "Nvme3", 00:21:25.333 "trtype": "tcp", 00:21:25.333 "traddr": "10.0.0.2", 00:21:25.333 "adrfam": "ipv4", 00:21:25.333 "trsvcid": "4420", 00:21:25.333 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:25.333 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:25.333 "hdgst": false, 00:21:25.333 "ddgst": false 00:21:25.333 }, 00:21:25.333 "method": "bdev_nvme_attach_controller" 00:21:25.333 },{ 00:21:25.333 "params": { 00:21:25.333 "name": "Nvme4", 00:21:25.333 "trtype": "tcp", 00:21:25.333 "traddr": "10.0.0.2", 00:21:25.333 "adrfam": "ipv4", 00:21:25.333 "trsvcid": "4420", 00:21:25.333 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:25.333 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:25.333 "hdgst": false, 00:21:25.333 "ddgst": false 00:21:25.333 }, 00:21:25.333 "method": "bdev_nvme_attach_controller" 00:21:25.333 },{ 00:21:25.333 "params": { 00:21:25.333 "name": "Nvme5", 00:21:25.333 "trtype": "tcp", 00:21:25.333 "traddr": "10.0.0.2", 00:21:25.333 "adrfam": "ipv4", 00:21:25.333 "trsvcid": "4420", 00:21:25.333 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:25.333 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:25.333 "hdgst": false, 00:21:25.333 "ddgst": false 00:21:25.333 }, 00:21:25.333 "method": "bdev_nvme_attach_controller" 00:21:25.333 },{ 00:21:25.333 "params": { 00:21:25.333 "name": "Nvme6", 00:21:25.333 "trtype": "tcp", 00:21:25.333 "traddr": "10.0.0.2", 00:21:25.333 "adrfam": "ipv4", 00:21:25.333 "trsvcid": "4420", 00:21:25.333 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:25.333 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:25.333 "hdgst": false, 00:21:25.333 "ddgst": false 00:21:25.333 }, 00:21:25.333 "method": "bdev_nvme_attach_controller" 00:21:25.333 },{ 00:21:25.333 "params": { 00:21:25.333 "name": "Nvme7", 00:21:25.333 "trtype": "tcp", 00:21:25.333 "traddr": "10.0.0.2", 00:21:25.333 "adrfam": "ipv4", 00:21:25.333 "trsvcid": "4420", 00:21:25.333 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:25.333 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:25.333 "hdgst": false, 00:21:25.333 "ddgst": false 00:21:25.333 }, 00:21:25.333 "method": "bdev_nvme_attach_controller" 00:21:25.333 },{ 00:21:25.333 "params": { 00:21:25.333 "name": "Nvme8", 00:21:25.333 "trtype": "tcp", 00:21:25.333 "traddr": "10.0.0.2", 00:21:25.333 "adrfam": "ipv4", 00:21:25.333 "trsvcid": "4420", 00:21:25.333 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:25.333 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:25.333 "hdgst": false, 
00:21:25.333 "ddgst": false 00:21:25.333 }, 00:21:25.333 "method": "bdev_nvme_attach_controller" 00:21:25.333 },{ 00:21:25.333 "params": { 00:21:25.333 "name": "Nvme9", 00:21:25.333 "trtype": "tcp", 00:21:25.333 "traddr": "10.0.0.2", 00:21:25.333 "adrfam": "ipv4", 00:21:25.333 "trsvcid": "4420", 00:21:25.333 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:25.333 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:25.333 "hdgst": false, 00:21:25.333 "ddgst": false 00:21:25.333 }, 00:21:25.333 "method": "bdev_nvme_attach_controller" 00:21:25.333 },{ 00:21:25.333 "params": { 00:21:25.333 "name": "Nvme10", 00:21:25.333 "trtype": "tcp", 00:21:25.333 "traddr": "10.0.0.2", 00:21:25.333 "adrfam": "ipv4", 00:21:25.333 "trsvcid": "4420", 00:21:25.333 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:25.333 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:25.333 "hdgst": false, 00:21:25.333 "ddgst": false 00:21:25.333 }, 00:21:25.333 "method": "bdev_nvme_attach_controller" 00:21:25.333 }' 00:21:25.333 [2024-07-15 14:00:20.129400] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:21:25.333 [2024-07-15 14:00:20.129474] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:25.333 EAL: No free 2048 kB hugepages reported on node 1 00:21:25.591 [2024-07-15 14:00:20.194747] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:25.591 [2024-07-15 14:00:20.305246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:27.490 14:00:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:27.490 14:00:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:21:27.490 14:00:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:27.490 14:00:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.490 14:00:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:27.490 14:00:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.490 14:00:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 3798380 00:21:27.490 14:00:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:21:27.490 14:00:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:21:28.427 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3798380 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:21:28.427 14:00:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 3798199 00:21:28.427 14:00:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:28.427 14:00:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:28.427 14:00:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:21:28.427 14:00:23 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:21:28.427 14:00:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:28.427 14:00:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:28.427 { 00:21:28.427 "params": { 00:21:28.427 "name": "Nvme$subsystem", 00:21:28.427 "trtype": "$TEST_TRANSPORT", 00:21:28.427 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:28.427 "adrfam": "ipv4", 00:21:28.427 "trsvcid": "$NVMF_PORT", 00:21:28.427 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:28.427 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:28.427 "hdgst": ${hdgst:-false}, 00:21:28.427 "ddgst": ${ddgst:-false} 00:21:28.427 }, 00:21:28.427 "method": "bdev_nvme_attach_controller" 00:21:28.427 } 00:21:28.427 EOF 00:21:28.427 )") 00:21:28.427 14:00:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:28.427 14:00:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:28.427 14:00:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:28.427 { 00:21:28.427 "params": { 00:21:28.427 "name": "Nvme$subsystem", 00:21:28.427 "trtype": "$TEST_TRANSPORT", 00:21:28.427 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:28.427 "adrfam": "ipv4", 00:21:28.427 "trsvcid": "$NVMF_PORT", 00:21:28.427 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:28.427 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:28.427 "hdgst": ${hdgst:-false}, 00:21:28.427 "ddgst": ${ddgst:-false} 00:21:28.427 }, 00:21:28.427 "method": "bdev_nvme_attach_controller" 00:21:28.427 } 00:21:28.427 EOF 00:21:28.427 )") 00:21:28.427 14:00:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:28.427 14:00:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:28.427 14:00:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:28.427 { 00:21:28.427 "params": { 00:21:28.427 "name": "Nvme$subsystem", 00:21:28.427 "trtype": "$TEST_TRANSPORT", 00:21:28.427 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:28.427 "adrfam": "ipv4", 00:21:28.427 "trsvcid": "$NVMF_PORT", 00:21:28.427 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:28.427 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:28.427 "hdgst": ${hdgst:-false}, 00:21:28.427 "ddgst": ${ddgst:-false} 00:21:28.427 }, 00:21:28.427 "method": "bdev_nvme_attach_controller" 00:21:28.427 } 00:21:28.427 EOF 00:21:28.427 )") 00:21:28.427 14:00:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:28.427 14:00:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:28.427 14:00:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:28.427 { 00:21:28.427 "params": { 00:21:28.427 "name": "Nvme$subsystem", 00:21:28.427 "trtype": "$TEST_TRANSPORT", 00:21:28.427 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:28.427 "adrfam": "ipv4", 00:21:28.427 "trsvcid": "$NVMF_PORT", 00:21:28.427 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:28.427 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:28.427 "hdgst": ${hdgst:-false}, 00:21:28.427 "ddgst": ${ddgst:-false} 00:21:28.427 }, 00:21:28.427 "method": "bdev_nvme_attach_controller" 00:21:28.427 } 00:21:28.427 EOF 00:21:28.427 )") 00:21:28.427 14:00:23 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:28.427 14:00:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:28.427 14:00:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:28.427 { 00:21:28.427 "params": { 00:21:28.427 "name": "Nvme$subsystem", 00:21:28.427 "trtype": "$TEST_TRANSPORT", 00:21:28.427 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:28.427 "adrfam": "ipv4", 00:21:28.427 "trsvcid": "$NVMF_PORT", 00:21:28.427 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:28.427 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:28.427 "hdgst": ${hdgst:-false}, 00:21:28.427 "ddgst": ${ddgst:-false} 00:21:28.427 }, 00:21:28.427 "method": "bdev_nvme_attach_controller" 00:21:28.427 } 00:21:28.427 EOF 00:21:28.427 )") 00:21:28.427 14:00:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:28.427 14:00:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:28.427 14:00:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:28.427 { 00:21:28.427 "params": { 00:21:28.427 "name": "Nvme$subsystem", 00:21:28.427 "trtype": "$TEST_TRANSPORT", 00:21:28.427 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:28.427 "adrfam": "ipv4", 00:21:28.427 "trsvcid": "$NVMF_PORT", 00:21:28.427 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:28.427 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:28.427 "hdgst": ${hdgst:-false}, 00:21:28.427 "ddgst": ${ddgst:-false} 00:21:28.427 }, 00:21:28.427 "method": "bdev_nvme_attach_controller" 00:21:28.427 } 00:21:28.427 EOF 00:21:28.427 )") 00:21:28.427 14:00:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:28.427 14:00:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:28.427 14:00:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:28.427 { 00:21:28.427 "params": { 00:21:28.427 "name": "Nvme$subsystem", 00:21:28.427 "trtype": "$TEST_TRANSPORT", 00:21:28.427 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:28.427 "adrfam": "ipv4", 00:21:28.427 "trsvcid": "$NVMF_PORT", 00:21:28.427 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:28.427 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:28.427 "hdgst": ${hdgst:-false}, 00:21:28.427 "ddgst": ${ddgst:-false} 00:21:28.427 }, 00:21:28.427 "method": "bdev_nvme_attach_controller" 00:21:28.427 } 00:21:28.427 EOF 00:21:28.427 )") 00:21:28.427 14:00:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:28.427 14:00:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:28.427 14:00:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:28.427 { 00:21:28.427 "params": { 00:21:28.427 "name": "Nvme$subsystem", 00:21:28.427 "trtype": "$TEST_TRANSPORT", 00:21:28.427 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:28.427 "adrfam": "ipv4", 00:21:28.427 "trsvcid": "$NVMF_PORT", 00:21:28.427 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:28.427 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:28.427 "hdgst": ${hdgst:-false}, 00:21:28.427 "ddgst": ${ddgst:-false} 00:21:28.427 }, 00:21:28.427 "method": "bdev_nvme_attach_controller" 00:21:28.427 } 00:21:28.427 EOF 00:21:28.427 )") 00:21:28.427 14:00:23 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:28.427 14:00:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:28.427 14:00:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:28.427 { 00:21:28.427 "params": { 00:21:28.427 "name": "Nvme$subsystem", 00:21:28.427 "trtype": "$TEST_TRANSPORT", 00:21:28.427 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:28.427 "adrfam": "ipv4", 00:21:28.427 "trsvcid": "$NVMF_PORT", 00:21:28.427 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:28.427 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:28.427 "hdgst": ${hdgst:-false}, 00:21:28.427 "ddgst": ${ddgst:-false} 00:21:28.427 }, 00:21:28.427 "method": "bdev_nvme_attach_controller" 00:21:28.427 } 00:21:28.427 EOF 00:21:28.427 )") 00:21:28.427 14:00:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:28.427 14:00:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:28.428 14:00:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:28.428 { 00:21:28.428 "params": { 00:21:28.428 "name": "Nvme$subsystem", 00:21:28.428 "trtype": "$TEST_TRANSPORT", 00:21:28.428 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:28.428 "adrfam": "ipv4", 00:21:28.428 "trsvcid": "$NVMF_PORT", 00:21:28.428 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:28.428 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:28.428 "hdgst": ${hdgst:-false}, 00:21:28.428 "ddgst": ${ddgst:-false} 00:21:28.428 }, 00:21:28.428 "method": "bdev_nvme_attach_controller" 00:21:28.428 } 00:21:28.428 EOF 00:21:28.428 )") 00:21:28.428 14:00:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:28.428 14:00:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:21:28.428 14:00:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:21:28.428 14:00:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:28.428 "params": { 00:21:28.428 "name": "Nvme1", 00:21:28.428 "trtype": "tcp", 00:21:28.428 "traddr": "10.0.0.2", 00:21:28.428 "adrfam": "ipv4", 00:21:28.428 "trsvcid": "4420", 00:21:28.428 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:28.428 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:28.428 "hdgst": false, 00:21:28.428 "ddgst": false 00:21:28.428 }, 00:21:28.428 "method": "bdev_nvme_attach_controller" 00:21:28.428 },{ 00:21:28.428 "params": { 00:21:28.428 "name": "Nvme2", 00:21:28.428 "trtype": "tcp", 00:21:28.428 "traddr": "10.0.0.2", 00:21:28.428 "adrfam": "ipv4", 00:21:28.428 "trsvcid": "4420", 00:21:28.428 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:28.428 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:28.428 "hdgst": false, 00:21:28.428 "ddgst": false 00:21:28.428 }, 00:21:28.428 "method": "bdev_nvme_attach_controller" 00:21:28.428 },{ 00:21:28.428 "params": { 00:21:28.428 "name": "Nvme3", 00:21:28.428 "trtype": "tcp", 00:21:28.428 "traddr": "10.0.0.2", 00:21:28.428 "adrfam": "ipv4", 00:21:28.428 "trsvcid": "4420", 00:21:28.428 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:28.428 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:28.428 "hdgst": false, 00:21:28.428 "ddgst": false 00:21:28.428 }, 00:21:28.428 "method": "bdev_nvme_attach_controller" 00:21:28.428 },{ 00:21:28.428 "params": { 00:21:28.428 "name": "Nvme4", 00:21:28.428 "trtype": "tcp", 00:21:28.428 "traddr": "10.0.0.2", 00:21:28.428 "adrfam": "ipv4", 00:21:28.428 "trsvcid": "4420", 00:21:28.428 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:28.428 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:28.428 "hdgst": false, 00:21:28.428 "ddgst": false 00:21:28.428 }, 00:21:28.428 "method": "bdev_nvme_attach_controller" 00:21:28.428 },{ 00:21:28.428 "params": { 00:21:28.428 "name": "Nvme5", 00:21:28.428 "trtype": "tcp", 00:21:28.428 "traddr": "10.0.0.2", 00:21:28.428 "adrfam": "ipv4", 00:21:28.428 "trsvcid": "4420", 00:21:28.428 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:28.428 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:28.428 "hdgst": false, 00:21:28.428 "ddgst": false 00:21:28.428 }, 00:21:28.428 "method": "bdev_nvme_attach_controller" 00:21:28.428 },{ 00:21:28.428 "params": { 00:21:28.428 "name": "Nvme6", 00:21:28.428 "trtype": "tcp", 00:21:28.428 "traddr": "10.0.0.2", 00:21:28.428 "adrfam": "ipv4", 00:21:28.428 "trsvcid": "4420", 00:21:28.428 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:28.428 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:28.428 "hdgst": false, 00:21:28.428 "ddgst": false 00:21:28.428 }, 00:21:28.428 "method": "bdev_nvme_attach_controller" 00:21:28.428 },{ 00:21:28.428 "params": { 00:21:28.428 "name": "Nvme7", 00:21:28.428 "trtype": "tcp", 00:21:28.428 "traddr": "10.0.0.2", 00:21:28.428 "adrfam": "ipv4", 00:21:28.428 "trsvcid": "4420", 00:21:28.428 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:28.428 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:28.428 "hdgst": false, 00:21:28.428 "ddgst": false 00:21:28.428 }, 00:21:28.428 "method": "bdev_nvme_attach_controller" 00:21:28.428 },{ 00:21:28.428 "params": { 00:21:28.428 "name": "Nvme8", 00:21:28.428 "trtype": "tcp", 00:21:28.428 "traddr": "10.0.0.2", 00:21:28.428 "adrfam": "ipv4", 00:21:28.428 "trsvcid": "4420", 00:21:28.428 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:28.428 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:28.428 "hdgst": false, 
00:21:28.428 "ddgst": false 00:21:28.428 }, 00:21:28.428 "method": "bdev_nvme_attach_controller" 00:21:28.428 },{ 00:21:28.428 "params": { 00:21:28.428 "name": "Nvme9", 00:21:28.428 "trtype": "tcp", 00:21:28.428 "traddr": "10.0.0.2", 00:21:28.428 "adrfam": "ipv4", 00:21:28.428 "trsvcid": "4420", 00:21:28.428 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:28.428 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:28.428 "hdgst": false, 00:21:28.428 "ddgst": false 00:21:28.428 }, 00:21:28.428 "method": "bdev_nvme_attach_controller" 00:21:28.428 },{ 00:21:28.428 "params": { 00:21:28.428 "name": "Nvme10", 00:21:28.428 "trtype": "tcp", 00:21:28.428 "traddr": "10.0.0.2", 00:21:28.428 "adrfam": "ipv4", 00:21:28.428 "trsvcid": "4420", 00:21:28.428 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:28.428 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:28.428 "hdgst": false, 00:21:28.428 "ddgst": false 00:21:28.428 }, 00:21:28.428 "method": "bdev_nvme_attach_controller" 00:21:28.428 }' 00:21:28.428 [2024-07-15 14:00:23.160956] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:21:28.428 [2024-07-15 14:00:23.161040] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3798682 ] 00:21:28.428 EAL: No free 2048 kB hugepages reported on node 1 00:21:28.428 [2024-07-15 14:00:23.229158] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:28.685 [2024-07-15 14:00:23.344842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.057 Running I/O for 1 seconds... 00:21:31.438 00:21:31.438 Latency(us) 00:21:31.438 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:31.438 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.438 Verification LBA range: start 0x0 length 0x400 00:21:31.438 Nvme1n1 : 1.15 227.13 14.20 0.00 0.00 273441.82 18350.08 264085.81 00:21:31.438 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.438 Verification LBA range: start 0x0 length 0x400 00:21:31.438 Nvme2n1 : 1.17 219.47 13.72 0.00 0.00 282123.76 6213.78 267192.70 00:21:31.438 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.438 Verification LBA range: start 0x0 length 0x400 00:21:31.438 Nvme3n1 : 1.11 234.58 14.66 0.00 0.00 254455.38 18058.81 267192.70 00:21:31.438 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.438 Verification LBA range: start 0x0 length 0x400 00:21:31.438 Nvme4n1 : 1.11 231.59 14.47 0.00 0.00 258264.18 19903.53 259425.47 00:21:31.438 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.439 Verification LBA range: start 0x0 length 0x400 00:21:31.439 Nvme5n1 : 1.17 218.27 13.64 0.00 0.00 272129.52 21456.97 268746.15 00:21:31.439 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.439 Verification LBA range: start 0x0 length 0x400 00:21:31.439 Nvme6n1 : 1.12 228.36 14.27 0.00 0.00 254122.67 20388.98 260978.92 00:21:31.439 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.439 Verification LBA range: start 0x0 length 0x400 00:21:31.439 Nvme7n1 : 1.16 221.00 13.81 0.00 0.00 259559.73 17573.36 262532.36 00:21:31.439 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.439 Verification LBA range: start 
0x0 length 0x400 00:21:31.439 Nvme8n1 : 1.19 269.70 16.86 0.00 0.00 209624.48 13981.01 250104.79 00:21:31.439 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.439 Verification LBA range: start 0x0 length 0x400 00:21:31.439 Nvme9n1 : 1.18 216.55 13.53 0.00 0.00 256539.31 21359.88 299815.06 00:21:31.439 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.439 Verification LBA range: start 0x0 length 0x400 00:21:31.439 Nvme10n1 : 1.18 220.91 13.81 0.00 0.00 246974.98 5946.79 278066.82 00:21:31.439 =================================================================================================================== 00:21:31.439 Total : 2287.56 142.97 0.00 0.00 255593.96 5946.79 299815.06 00:21:31.439 14:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:21:31.439 14:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:31.439 14:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:31.439 14:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:31.439 14:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:31.439 14:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:31.439 14:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:21:31.439 14:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:31.439 14:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:21:31.439 14:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:31.439 14:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:31.439 rmmod nvme_tcp 00:21:31.439 rmmod nvme_fabrics 00:21:31.697 rmmod nvme_keyring 00:21:31.697 14:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:31.697 14:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:21:31.697 14:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:21:31.697 14:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 3798199 ']' 00:21:31.697 14:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 3798199 00:21:31.697 14:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 3798199 ']' 00:21:31.697 14:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 3798199 00:21:31.697 14:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:21:31.697 14:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:31.697 14:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3798199 00:21:31.697 14:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:31.697 14:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 
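
The trace above covers the tc1 teardown: nvmfcleanup retries unloading the nvme-tcp/nvme-fabrics modules, and killprocess validates the recorded pid before signalling it (the kill/wait appears just below). A hedged sketch of that cleanup pattern — the _sketch helper names and the retry timing are assumptions; only the module names, the 20-iteration budget, and the pid checks come from the log:

# Sketch of the traced teardown, not the real common.sh/autotest_common.sh code.
nvmfcleanup_sketch() {
    sync
    set +e
    for i in {1..20}; do
        # Module removal can fail while connections are still draining; retry.
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1
    done
    set -e
}

killprocess_sketch() {
    local pid=$1
    [ -n "$pid" ] || return 1                  # no pid recorded, nothing to do
    kill -0 "$pid" 2>/dev/null || return 0     # process already gone
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" != sudo ] || return 1            # never signal a bare sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true            # reap it if it is our child
}
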
00:21:31.697 14:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3798199' 00:21:31.697 killing process with pid 3798199 00:21:31.697 14:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 3798199 00:21:31.697 14:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 3798199 00:21:32.263 14:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:32.263 14:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:32.263 14:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:32.263 14:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:32.263 14:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:32.263 14:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:32.263 14:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:32.263 14:00:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:34.165 00:21:34.165 real 0m11.963s 00:21:34.165 user 0m34.616s 00:21:34.165 sys 0m3.270s 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:34.165 ************************************ 00:21:34.165 END TEST nvmf_shutdown_tc1 00:21:34.165 ************************************ 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:34.165 ************************************ 00:21:34.165 START TEST nvmf_shutdown_tc2 00:21:34.165 ************************************ 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:34.165 14:00:28 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:21:34.165 Found 0000:84:00.0 (0x8086 - 0x159b) 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:21:34.165 Found 0000:84:00.1 (0x8086 - 0x159b) 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:34.165 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:34.166 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:34.166 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:21:34.166 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:34.166 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:34.166 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:21:34.166 Found net devices under 0000:84:00.0: cvl_0_0 00:21:34.166 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:34.166 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:34.166 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:34.166 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:34.166 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:34.166 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:34.166 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:34.166 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:34.166 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:21:34.166 Found net devices under 0000:84:00.1: cvl_0_1 00:21:34.166 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:34.166 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:34.166 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:34.166 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:34.166 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:34.166 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:34.166 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:34.166 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:34.166 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:34.166 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:34.166 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:34.166 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:34.166 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:34.166 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:34.166 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:34.166 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:34.166 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 
addr flush cvl_0_1 00:21:34.166 14:00:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:34.166 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:34.423 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:34.423 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:34.423 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:34.423 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:34.423 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:34.423 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:34.423 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:34.423 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:34.423 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:21:34.423 00:21:34.423 --- 10.0.0.2 ping statistics --- 00:21:34.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:34.423 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:21:34.423 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:34.423 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:34.423 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.244 ms 00:21:34.423 00:21:34.423 --- 10.0.0.1 ping statistics --- 00:21:34.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:34.423 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:21:34.423 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:34.423 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:21:34.423 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:34.423 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:34.423 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:34.423 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:34.423 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:34.423 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:34.423 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:34.423 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:34.423 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:34.423 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:34.423 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:34.423 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@481 -- # nvmfpid=3799565 00:21:34.423 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:34.423 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3799565 00:21:34.423 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3799565 ']' 00:21:34.423 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:34.423 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:34.423 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:34.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:34.423 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:34.423 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:34.423 [2024-07-15 14:00:29.189023] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:21:34.423 [2024-07-15 14:00:29.189111] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:34.423 EAL: No free 2048 kB hugepages reported on node 1 00:21:34.423 [2024-07-15 14:00:29.251479] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:34.681 [2024-07-15 14:00:29.356486] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:34.681 [2024-07-15 14:00:29.356558] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:34.681 [2024-07-15 14:00:29.356585] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:34.681 [2024-07-15 14:00:29.356597] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:34.681 [2024-07-15 14:00:29.356607] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
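
The entries above show nvmf_tgt being launched inside the cvl_0_0_ns_spdk namespace configured earlier and the harness waiting for its RPC socket before proceeding. A rough equivalent of that launch-and-wait step, reusing the binary and socket paths printed in the log; the polling loop is an approximation of waitforlisten, not its actual implementation:

#!/usr/bin/env bash
NS=cvl_0_0_ns_spdk
NVMF_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
RPC_SOCK=/var/tmp/spdk.sock

# Launch the target inside the prepared namespace and remember its pid.
ip netns exec "$NS" "$NVMF_TGT" -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!

echo "Waiting for process to start up and listen on UNIX domain socket $RPC_SOCK..."
for ((i = 0; i < 100; i++)); do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    [ -S "$RPC_SOCK" ] && break          # RPC socket exists: target is listening
    sleep 0.5
done
[ -S "$RPC_SOCK" ] || { echo "timed out waiting for $RPC_SOCK" >&2; exit 1; }
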
00:21:34.681 [2024-07-15 14:00:29.356690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:34.681 [2024-07-15 14:00:29.356821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:34.681 [2024-07-15 14:00:29.356870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:34.681 [2024-07-15 14:00:29.356873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:34.681 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:34.681 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:21:34.681 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:34.681 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:34.681 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:34.681 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:34.681 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:34.681 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.681 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:34.681 [2024-07-15 14:00:29.506397] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:34.681 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.681 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:34.681 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:34.681 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:34.681 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:34.681 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:34.681 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:34.681 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:34.937 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:34.937 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:34.937 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:34.937 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:34.937 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:34.937 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:34.937 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:34.937 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:34.937 14:00:29 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:34.937 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:34.937 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:34.937 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:34.937 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:34.937 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:34.937 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:34.937 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:34.937 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:34.937 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:34.937 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:34.937 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.937 14:00:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:34.937 Malloc1 00:21:34.937 [2024-07-15 14:00:29.591704] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:34.937 Malloc2 00:21:34.937 Malloc3 00:21:34.937 Malloc4 00:21:34.937 Malloc5 00:21:35.194 Malloc6 00:21:35.194 Malloc7 00:21:35.194 Malloc8 00:21:35.194 Malloc9 00:21:35.194 Malloc10 00:21:35.452 14:00:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.452 14:00:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:35.452 14:00:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:35.452 14:00:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:35.452 14:00:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=3799744 00:21:35.452 14:00:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 3799744 /var/tmp/bdevperf.sock 00:21:35.452 14:00:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3799744 ']' 00:21:35.452 14:00:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:35.452 14:00:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:35.452 14:00:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:35.452 14:00:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:21:35.452 14:00:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:35.452 14:00:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:21:35.452 14:00:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:35.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:35.452 14:00:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:35.452 14:00:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:35.452 14:00:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:35.452 { 00:21:35.452 "params": { 00:21:35.452 "name": "Nvme$subsystem", 00:21:35.452 "trtype": "$TEST_TRANSPORT", 00:21:35.452 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:35.452 "adrfam": "ipv4", 00:21:35.452 "trsvcid": "$NVMF_PORT", 00:21:35.452 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:35.452 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:35.452 "hdgst": ${hdgst:-false}, 00:21:35.452 "ddgst": ${ddgst:-false} 00:21:35.452 }, 00:21:35.452 "method": "bdev_nvme_attach_controller" 00:21:35.452 } 00:21:35.452 EOF 00:21:35.452 )") 00:21:35.452 14:00:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:35.452 14:00:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:35.452 14:00:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:35.452 14:00:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:35.452 { 00:21:35.452 "params": { 00:21:35.452 "name": "Nvme$subsystem", 00:21:35.452 "trtype": "$TEST_TRANSPORT", 00:21:35.452 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:35.452 "adrfam": "ipv4", 00:21:35.452 "trsvcid": "$NVMF_PORT", 00:21:35.452 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:35.452 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:35.452 "hdgst": ${hdgst:-false}, 00:21:35.452 "ddgst": ${ddgst:-false} 00:21:35.452 }, 00:21:35.452 "method": "bdev_nvme_attach_controller" 00:21:35.452 } 00:21:35.452 EOF 00:21:35.452 )") 00:21:35.452 14:00:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:35.452 14:00:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:35.452 14:00:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:35.452 { 00:21:35.452 "params": { 00:21:35.452 "name": "Nvme$subsystem", 00:21:35.452 "trtype": "$TEST_TRANSPORT", 00:21:35.452 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:35.452 "adrfam": "ipv4", 00:21:35.452 "trsvcid": "$NVMF_PORT", 00:21:35.452 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:35.452 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:35.452 "hdgst": ${hdgst:-false}, 00:21:35.452 "ddgst": ${ddgst:-false} 00:21:35.452 }, 00:21:35.452 "method": "bdev_nvme_attach_controller" 00:21:35.452 } 00:21:35.452 EOF 00:21:35.453 )") 00:21:35.453 14:00:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:35.453 14:00:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:35.453 14:00:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:35.453 { 00:21:35.453 "params": { 00:21:35.453 "name": "Nvme$subsystem", 00:21:35.453 "trtype": "$TEST_TRANSPORT", 00:21:35.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:35.453 "adrfam": "ipv4", 00:21:35.453 "trsvcid": "$NVMF_PORT", 00:21:35.453 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:21:35.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:35.453 "hdgst": ${hdgst:-false}, 00:21:35.453 "ddgst": ${ddgst:-false} 00:21:35.453 }, 00:21:35.453 "method": "bdev_nvme_attach_controller" 00:21:35.453 } 00:21:35.453 EOF 00:21:35.453 )") 00:21:35.453 14:00:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:35.453 14:00:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:35.453 14:00:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:35.453 { 00:21:35.453 "params": { 00:21:35.453 "name": "Nvme$subsystem", 00:21:35.453 "trtype": "$TEST_TRANSPORT", 00:21:35.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:35.453 "adrfam": "ipv4", 00:21:35.453 "trsvcid": "$NVMF_PORT", 00:21:35.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:35.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:35.453 "hdgst": ${hdgst:-false}, 00:21:35.453 "ddgst": ${ddgst:-false} 00:21:35.453 }, 00:21:35.453 "method": "bdev_nvme_attach_controller" 00:21:35.453 } 00:21:35.453 EOF 00:21:35.453 )") 00:21:35.453 14:00:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:35.453 14:00:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:35.453 14:00:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:35.453 { 00:21:35.453 "params": { 00:21:35.453 "name": "Nvme$subsystem", 00:21:35.453 "trtype": "$TEST_TRANSPORT", 00:21:35.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:35.453 "adrfam": "ipv4", 00:21:35.453 "trsvcid": "$NVMF_PORT", 00:21:35.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:35.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:35.453 "hdgst": ${hdgst:-false}, 00:21:35.453 "ddgst": ${ddgst:-false} 00:21:35.453 }, 00:21:35.453 "method": "bdev_nvme_attach_controller" 00:21:35.453 } 00:21:35.453 EOF 00:21:35.453 )") 00:21:35.453 14:00:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:35.453 14:00:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:35.453 14:00:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:35.453 { 00:21:35.453 "params": { 00:21:35.453 "name": "Nvme$subsystem", 00:21:35.453 "trtype": "$TEST_TRANSPORT", 00:21:35.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:35.453 "adrfam": "ipv4", 00:21:35.453 "trsvcid": "$NVMF_PORT", 00:21:35.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:35.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:35.453 "hdgst": ${hdgst:-false}, 00:21:35.453 "ddgst": ${ddgst:-false} 00:21:35.453 }, 00:21:35.453 "method": "bdev_nvme_attach_controller" 00:21:35.453 } 00:21:35.453 EOF 00:21:35.453 )") 00:21:35.453 14:00:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:35.453 14:00:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:35.453 14:00:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:35.453 { 00:21:35.453 "params": { 00:21:35.453 "name": "Nvme$subsystem", 00:21:35.453 "trtype": "$TEST_TRANSPORT", 00:21:35.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:35.453 "adrfam": "ipv4", 00:21:35.453 "trsvcid": "$NVMF_PORT", 00:21:35.453 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:21:35.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:35.453 "hdgst": ${hdgst:-false}, 00:21:35.453 "ddgst": ${ddgst:-false} 00:21:35.453 }, 00:21:35.453 "method": "bdev_nvme_attach_controller" 00:21:35.453 } 00:21:35.453 EOF 00:21:35.453 )") 00:21:35.453 14:00:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:35.453 14:00:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:35.453 14:00:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:35.453 { 00:21:35.453 "params": { 00:21:35.453 "name": "Nvme$subsystem", 00:21:35.453 "trtype": "$TEST_TRANSPORT", 00:21:35.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:35.453 "adrfam": "ipv4", 00:21:35.453 "trsvcid": "$NVMF_PORT", 00:21:35.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:35.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:35.453 "hdgst": ${hdgst:-false}, 00:21:35.453 "ddgst": ${ddgst:-false} 00:21:35.453 }, 00:21:35.453 "method": "bdev_nvme_attach_controller" 00:21:35.453 } 00:21:35.453 EOF 00:21:35.453 )") 00:21:35.453 14:00:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:35.453 14:00:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:35.453 14:00:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:35.453 { 00:21:35.453 "params": { 00:21:35.453 "name": "Nvme$subsystem", 00:21:35.453 "trtype": "$TEST_TRANSPORT", 00:21:35.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:35.453 "adrfam": "ipv4", 00:21:35.453 "trsvcid": "$NVMF_PORT", 00:21:35.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:35.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:35.453 "hdgst": ${hdgst:-false}, 00:21:35.453 "ddgst": ${ddgst:-false} 00:21:35.453 }, 00:21:35.453 "method": "bdev_nvme_attach_controller" 00:21:35.453 } 00:21:35.453 EOF 00:21:35.453 )") 00:21:35.453 14:00:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:35.453 14:00:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:21:35.453 14:00:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:21:35.453 14:00:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:35.453 "params": { 00:21:35.453 "name": "Nvme1", 00:21:35.453 "trtype": "tcp", 00:21:35.453 "traddr": "10.0.0.2", 00:21:35.453 "adrfam": "ipv4", 00:21:35.453 "trsvcid": "4420", 00:21:35.453 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:35.453 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:35.453 "hdgst": false, 00:21:35.453 "ddgst": false 00:21:35.453 }, 00:21:35.453 "method": "bdev_nvme_attach_controller" 00:21:35.453 },{ 00:21:35.453 "params": { 00:21:35.453 "name": "Nvme2", 00:21:35.453 "trtype": "tcp", 00:21:35.453 "traddr": "10.0.0.2", 00:21:35.453 "adrfam": "ipv4", 00:21:35.453 "trsvcid": "4420", 00:21:35.453 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:35.453 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:35.453 "hdgst": false, 00:21:35.453 "ddgst": false 00:21:35.453 }, 00:21:35.453 "method": "bdev_nvme_attach_controller" 00:21:35.453 },{ 00:21:35.453 "params": { 00:21:35.453 "name": "Nvme3", 00:21:35.453 "trtype": "tcp", 00:21:35.453 "traddr": "10.0.0.2", 00:21:35.453 "adrfam": "ipv4", 00:21:35.453 "trsvcid": "4420", 00:21:35.453 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:35.453 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:35.453 "hdgst": false, 00:21:35.453 "ddgst": false 00:21:35.453 }, 00:21:35.453 "method": "bdev_nvme_attach_controller" 00:21:35.453 },{ 00:21:35.453 "params": { 00:21:35.453 "name": "Nvme4", 00:21:35.453 "trtype": "tcp", 00:21:35.453 "traddr": "10.0.0.2", 00:21:35.453 "adrfam": "ipv4", 00:21:35.453 "trsvcid": "4420", 00:21:35.453 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:35.453 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:35.453 "hdgst": false, 00:21:35.453 "ddgst": false 00:21:35.453 }, 00:21:35.453 "method": "bdev_nvme_attach_controller" 00:21:35.453 },{ 00:21:35.453 "params": { 00:21:35.453 "name": "Nvme5", 00:21:35.453 "trtype": "tcp", 00:21:35.453 "traddr": "10.0.0.2", 00:21:35.453 "adrfam": "ipv4", 00:21:35.453 "trsvcid": "4420", 00:21:35.453 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:35.453 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:35.453 "hdgst": false, 00:21:35.453 "ddgst": false 00:21:35.453 }, 00:21:35.453 "method": "bdev_nvme_attach_controller" 00:21:35.453 },{ 00:21:35.453 "params": { 00:21:35.453 "name": "Nvme6", 00:21:35.453 "trtype": "tcp", 00:21:35.453 "traddr": "10.0.0.2", 00:21:35.453 "adrfam": "ipv4", 00:21:35.453 "trsvcid": "4420", 00:21:35.453 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:35.453 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:35.453 "hdgst": false, 00:21:35.453 "ddgst": false 00:21:35.453 }, 00:21:35.453 "method": "bdev_nvme_attach_controller" 00:21:35.453 },{ 00:21:35.453 "params": { 00:21:35.453 "name": "Nvme7", 00:21:35.453 "trtype": "tcp", 00:21:35.453 "traddr": "10.0.0.2", 00:21:35.453 "adrfam": "ipv4", 00:21:35.453 "trsvcid": "4420", 00:21:35.453 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:35.453 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:35.453 "hdgst": false, 00:21:35.453 "ddgst": false 00:21:35.453 }, 00:21:35.453 "method": "bdev_nvme_attach_controller" 00:21:35.453 },{ 00:21:35.453 "params": { 00:21:35.453 "name": "Nvme8", 00:21:35.453 "trtype": "tcp", 00:21:35.453 "traddr": "10.0.0.2", 00:21:35.453 "adrfam": "ipv4", 00:21:35.453 "trsvcid": "4420", 00:21:35.453 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:35.453 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:35.453 "hdgst": false, 
00:21:35.453 "ddgst": false 00:21:35.453 }, 00:21:35.454 "method": "bdev_nvme_attach_controller" 00:21:35.454 },{ 00:21:35.454 "params": { 00:21:35.454 "name": "Nvme9", 00:21:35.454 "trtype": "tcp", 00:21:35.454 "traddr": "10.0.0.2", 00:21:35.454 "adrfam": "ipv4", 00:21:35.454 "trsvcid": "4420", 00:21:35.454 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:35.454 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:35.454 "hdgst": false, 00:21:35.454 "ddgst": false 00:21:35.454 }, 00:21:35.454 "method": "bdev_nvme_attach_controller" 00:21:35.454 },{ 00:21:35.454 "params": { 00:21:35.454 "name": "Nvme10", 00:21:35.454 "trtype": "tcp", 00:21:35.454 "traddr": "10.0.0.2", 00:21:35.454 "adrfam": "ipv4", 00:21:35.454 "trsvcid": "4420", 00:21:35.454 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:35.454 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:35.454 "hdgst": false, 00:21:35.454 "ddgst": false 00:21:35.454 }, 00:21:35.454 "method": "bdev_nvme_attach_controller" 00:21:35.454 }' 00:21:35.454 [2024-07-15 14:00:30.105418] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:21:35.454 [2024-07-15 14:00:30.105511] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3799744 ] 00:21:35.454 EAL: No free 2048 kB hugepages reported on node 1 00:21:35.454 [2024-07-15 14:00:30.168208] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.454 [2024-07-15 14:00:30.278779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:37.349 Running I/O for 10 seconds... 00:21:37.349 14:00:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:37.349 14:00:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:21:37.349 14:00:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:37.349 14:00:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.349 14:00:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:37.349 14:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.349 14:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:37.349 14:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:37.349 14:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:21:37.349 14:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:21:37.349 14:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:21:37.349 14:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:21:37.349 14:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:37.349 14:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:37.349 14:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:37.349 14:00:32 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.349 14:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:37.349 14:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.349 14:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:21:37.349 14:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:21:37.349 14:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:37.606 14:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:37.606 14:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:37.606 14:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:37.606 14:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:37.606 14:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.606 14:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:37.606 14:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.606 14:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:21:37.606 14:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:21:37.606 14:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:37.863 14:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:37.863 14:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:37.863 14:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:37.863 14:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.863 14:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:37.863 14:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:37.863 14:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.863 14:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=136 00:21:37.863 14:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 136 -ge 100 ']' 00:21:37.863 14:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:21:37.863 14:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:21:37.863 14:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:21:37.863 14:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 3799744 00:21:37.863 14:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 3799744 ']' 00:21:37.863 14:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 3799744 00:21:37.863 14:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@953 -- # uname 00:21:37.863 14:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:37.863 14:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3799744 00:21:37.863 14:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:37.863 14:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:37.863 14:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3799744' 00:21:37.863 killing process with pid 3799744 00:21:37.863 14:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 3799744 00:21:37.863 14:00:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 3799744 00:21:38.120 Received shutdown signal, test time was about 0.903111 seconds 00:21:38.120 00:21:38.120 Latency(us) 00:21:38.120 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:38.120 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:38.120 Verification LBA range: start 0x0 length 0x400 00:21:38.120 Nvme1n1 : 0.90 285.00 17.81 0.00 0.00 220613.97 19418.07 231463.44 00:21:38.120 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:38.120 Verification LBA range: start 0x0 length 0x400 00:21:38.120 Nvme2n1 : 0.89 216.16 13.51 0.00 0.00 286375.13 34175.81 243891.01 00:21:38.120 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:38.120 Verification LBA range: start 0x0 length 0x400 00:21:38.120 Nvme3n1 : 0.88 240.34 15.02 0.00 0.00 245886.42 13301.38 242337.56 00:21:38.121 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:38.121 Verification LBA range: start 0x0 length 0x400 00:21:38.121 Nvme4n1 : 0.87 230.79 14.42 0.00 0.00 252634.48 3155.44 256318.58 00:21:38.121 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:38.121 Verification LBA range: start 0x0 length 0x400 00:21:38.121 Nvme5n1 : 0.90 214.20 13.39 0.00 0.00 270654.83 20971.52 281173.71 00:21:38.121 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:38.121 Verification LBA range: start 0x0 length 0x400 00:21:38.121 Nvme6n1 : 0.86 222.56 13.91 0.00 0.00 252893.87 18738.44 262532.36 00:21:38.121 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:38.121 Verification LBA range: start 0x0 length 0x400 00:21:38.121 Nvme7n1 : 0.87 220.16 13.76 0.00 0.00 250591.00 19029.71 251658.24 00:21:38.121 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:38.121 Verification LBA range: start 0x0 length 0x400 00:21:38.121 Nvme8n1 : 0.89 216.86 13.55 0.00 0.00 249257.78 17767.54 262532.36 00:21:38.121 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:38.121 Verification LBA range: start 0x0 length 0x400 00:21:38.121 Nvme9n1 : 0.88 219.07 13.69 0.00 0.00 240191.46 37865.24 237677.23 00:21:38.121 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:38.121 Verification LBA range: start 0x0 length 0x400 00:21:38.121 Nvme10n1 : 0.90 212.79 13.30 0.00 0.00 243160.30 20680.25 290494.39 00:21:38.121 
=================================================================================================================== 00:21:38.121 Total : 2277.93 142.37 0.00 0.00 250206.86 3155.44 290494.39 00:21:38.378 14:00:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:21:39.309 14:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 3799565 00:21:39.309 14:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:21:39.309 14:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:39.309 14:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:39.309 14:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:39.309 14:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:39.309 14:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:39.309 14:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:21:39.309 14:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:39.309 14:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:21:39.309 14:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:39.309 14:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:39.309 rmmod nvme_tcp 00:21:39.309 rmmod nvme_fabrics 00:21:39.309 rmmod nvme_keyring 00:21:39.309 14:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:39.309 14:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:21:39.309 14:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:21:39.309 14:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 3799565 ']' 00:21:39.309 14:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 3799565 00:21:39.309 14:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 3799565 ']' 00:21:39.309 14:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 3799565 00:21:39.309 14:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:21:39.309 14:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:39.309 14:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3799565 00:21:39.309 14:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:39.309 14:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:39.309 14:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3799565' 00:21:39.309 killing process with pid 3799565 00:21:39.309 14:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 3799565 00:21:39.309 14:00:34 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 3799565 00:21:39.873 14:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:39.873 14:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:39.873 14:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:39.873 14:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:39.873 14:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:39.873 14:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:39.873 14:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:39.873 14:00:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:42.404 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:42.404 00:21:42.404 real 0m7.678s 00:21:42.404 user 0m23.332s 00:21:42.404 sys 0m1.435s 00:21:42.404 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:42.404 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:42.404 ************************************ 00:21:42.404 END TEST nvmf_shutdown_tc2 00:21:42.404 ************************************ 00:21:42.404 14:00:36 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:21:42.404 14:00:36 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:42.404 14:00:36 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:42.404 14:00:36 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:42.404 14:00:36 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:42.404 ************************************ 00:21:42.404 START TEST nvmf_shutdown_tc3 00:21:42.404 ************************************ 00:21:42.404 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:21:42.404 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:21:42.404 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:42.404 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:42.404 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:42.404 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:42.404 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:42.404 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:42.404 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:42.404 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:42.404 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:42.404 14:00:36 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:42.404 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:42.404 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:42.404 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:42.404 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:42.404 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:42.404 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:42.404 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:42.404 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:42.404 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:42.404 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:42.404 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:21:42.404 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:42.404 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:21:42.404 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:21:42.404 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:21:42.404 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:21:42.404 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:21:42.404 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:42.404 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:42.404 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:42.404 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:42.404 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:42.404 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:42.404 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:42.404 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:21:42.405 Found 0000:84:00.0 (0x8086 - 0x159b) 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:21:42.405 Found 0000:84:00.1 (0x8086 - 0x159b) 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:21:42.405 Found net devices under 0000:84:00.0: cvl_0_0 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:21:42.405 Found net devices under 0000:84:00.1: cvl_0_1 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:42.405 14:00:36 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:42.405 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:42.405 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:21:42.405 00:21:42.405 --- 10.0.0.2 ping statistics --- 00:21:42.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.405 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:42.405 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:42.405 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:21:42.405 00:21:42.405 --- 10.0.0.1 ping statistics --- 00:21:42.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.405 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=3800655 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF -m 0x1E 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 3800655 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 3800655 ']' 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:42.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:42.405 14:00:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:42.405 [2024-07-15 14:00:36.942274] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:21:42.405 [2024-07-15 14:00:36.942358] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:42.405 EAL: No free 2048 kB hugepages reported on node 1 00:21:42.405 [2024-07-15 14:00:37.007043] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:42.405 [2024-07-15 14:00:37.108538] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:42.405 [2024-07-15 14:00:37.108610] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:42.405 [2024-07-15 14:00:37.108624] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:42.405 [2024-07-15 14:00:37.108635] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:42.405 [2024-07-15 14:00:37.108644] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
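The namespace plumbing traced above (nvmf/common.sh: nvmf_tcp_init followed by nvmfappstart) reduces to the standalone sketch below. The interface names, addresses, port and core mask are the ones this run printed; the target binary path is abbreviated and the plain backgrounding of nvmf_tgt stands in for the harness's waitforlisten, so treat it as an illustration rather than the harness itself.

  # Move one port of the e810 pair into a private namespace so the target and
  # the initiator talk over a real link while staying off the host stack.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP in
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # Start the target inside the namespace: tracepoint mask 0xFFFF, cores 1-4 (0x1E).
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
  nvmfpid=$!

Once the reactors report in below, waitforlisten 3800655 polls /var/tmp/spdk.sock until the target's RPC server answers.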
00:21:42.405 [2024-07-15 14:00:37.108824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:42.405 [2024-07-15 14:00:37.108851] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:42.406 [2024-07-15 14:00:37.108908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:42.406 [2024-07-15 14:00:37.108911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:42.406 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:42.406 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:21:42.406 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:42.406 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:42.406 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:42.663 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:42.663 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:42.663 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.663 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:42.663 [2024-07-15 14:00:37.269757] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:42.663 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.663 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:42.663 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:42.663 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:42.663 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:42.663 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:42.663 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:42.663 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:42.663 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:42.663 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:42.663 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:42.663 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:42.663 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:42.663 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:42.663 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:42.663 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:42.663 14:00:37 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:42.663 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:42.663 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:42.663 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:42.663 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:42.663 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:42.663 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:42.663 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:42.663 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:42.663 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:42.663 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:42.663 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.663 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:42.663 Malloc1 00:21:42.663 [2024-07-15 14:00:37.355333] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:42.663 Malloc2 00:21:42.663 Malloc3 00:21:42.663 Malloc4 00:21:42.920 Malloc5 00:21:42.920 Malloc6 00:21:42.920 Malloc7 00:21:42.920 Malloc8 00:21:42.920 Malloc9 00:21:43.184 Malloc10 00:21:43.184 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.184 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:43.184 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:43.184 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:43.184 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=3800716 00:21:43.184 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 3800716 /var/tmp/bdevperf.sock 00:21:43.184 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 3800716 ']' 00:21:43.184 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:43.184 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:43.184 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:43.184 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:43.184 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:21:43.184 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:43.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:43.184 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:21:43.184 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:43.184 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:43.184 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:43.184 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:43.184 { 00:21:43.184 "params": { 00:21:43.184 "name": "Nvme$subsystem", 00:21:43.184 "trtype": "$TEST_TRANSPORT", 00:21:43.184 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.184 "adrfam": "ipv4", 00:21:43.184 "trsvcid": "$NVMF_PORT", 00:21:43.184 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.184 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.184 "hdgst": ${hdgst:-false}, 00:21:43.184 "ddgst": ${ddgst:-false} 00:21:43.184 }, 00:21:43.184 "method": "bdev_nvme_attach_controller" 00:21:43.184 } 00:21:43.184 EOF 00:21:43.184 )") 00:21:43.184 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:43.184 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:43.184 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:43.184 { 00:21:43.184 "params": { 00:21:43.184 "name": "Nvme$subsystem", 00:21:43.184 "trtype": "$TEST_TRANSPORT", 00:21:43.184 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.184 "adrfam": "ipv4", 00:21:43.184 "trsvcid": "$NVMF_PORT", 00:21:43.184 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.184 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.184 "hdgst": ${hdgst:-false}, 00:21:43.184 "ddgst": ${ddgst:-false} 00:21:43.184 }, 00:21:43.184 "method": "bdev_nvme_attach_controller" 00:21:43.184 } 00:21:43.184 EOF 00:21:43.184 )") 00:21:43.184 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:43.184 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:43.184 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:43.184 { 00:21:43.184 "params": { 00:21:43.184 "name": "Nvme$subsystem", 00:21:43.184 "trtype": "$TEST_TRANSPORT", 00:21:43.184 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.184 "adrfam": "ipv4", 00:21:43.184 "trsvcid": "$NVMF_PORT", 00:21:43.184 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.184 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.184 "hdgst": ${hdgst:-false}, 00:21:43.184 "ddgst": ${ddgst:-false} 00:21:43.184 }, 00:21:43.184 "method": "bdev_nvme_attach_controller" 00:21:43.184 } 00:21:43.184 EOF 00:21:43.184 )") 00:21:43.184 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:43.184 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:43.184 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:43.184 { 00:21:43.184 "params": { 00:21:43.184 "name": "Nvme$subsystem", 00:21:43.184 "trtype": "$TEST_TRANSPORT", 00:21:43.184 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.184 "adrfam": "ipv4", 00:21:43.184 "trsvcid": "$NVMF_PORT", 
00:21:43.184 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.184 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.184 "hdgst": ${hdgst:-false}, 00:21:43.184 "ddgst": ${ddgst:-false} 00:21:43.184 }, 00:21:43.184 "method": "bdev_nvme_attach_controller" 00:21:43.184 } 00:21:43.184 EOF 00:21:43.184 )") 00:21:43.184 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:43.184 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:43.184 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:43.184 { 00:21:43.184 "params": { 00:21:43.184 "name": "Nvme$subsystem", 00:21:43.184 "trtype": "$TEST_TRANSPORT", 00:21:43.184 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.184 "adrfam": "ipv4", 00:21:43.184 "trsvcid": "$NVMF_PORT", 00:21:43.184 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.184 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.184 "hdgst": ${hdgst:-false}, 00:21:43.184 "ddgst": ${ddgst:-false} 00:21:43.184 }, 00:21:43.184 "method": "bdev_nvme_attach_controller" 00:21:43.184 } 00:21:43.184 EOF 00:21:43.184 )") 00:21:43.184 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:43.184 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:43.184 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:43.184 { 00:21:43.184 "params": { 00:21:43.184 "name": "Nvme$subsystem", 00:21:43.184 "trtype": "$TEST_TRANSPORT", 00:21:43.184 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.184 "adrfam": "ipv4", 00:21:43.184 "trsvcid": "$NVMF_PORT", 00:21:43.184 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.184 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.184 "hdgst": ${hdgst:-false}, 00:21:43.185 "ddgst": ${ddgst:-false} 00:21:43.185 }, 00:21:43.185 "method": "bdev_nvme_attach_controller" 00:21:43.185 } 00:21:43.185 EOF 00:21:43.185 )") 00:21:43.185 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:43.185 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:43.185 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:43.185 { 00:21:43.185 "params": { 00:21:43.185 "name": "Nvme$subsystem", 00:21:43.185 "trtype": "$TEST_TRANSPORT", 00:21:43.185 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.185 "adrfam": "ipv4", 00:21:43.185 "trsvcid": "$NVMF_PORT", 00:21:43.185 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.185 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.185 "hdgst": ${hdgst:-false}, 00:21:43.185 "ddgst": ${ddgst:-false} 00:21:43.185 }, 00:21:43.185 "method": "bdev_nvme_attach_controller" 00:21:43.185 } 00:21:43.185 EOF 00:21:43.185 )") 00:21:43.185 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:43.185 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:43.185 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:43.185 { 00:21:43.185 "params": { 00:21:43.185 "name": "Nvme$subsystem", 00:21:43.185 "trtype": "$TEST_TRANSPORT", 00:21:43.185 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.185 "adrfam": "ipv4", 00:21:43.185 "trsvcid": "$NVMF_PORT", 00:21:43.185 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.185 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.185 "hdgst": ${hdgst:-false}, 00:21:43.185 "ddgst": ${ddgst:-false} 00:21:43.185 }, 00:21:43.185 "method": "bdev_nvme_attach_controller" 00:21:43.185 } 00:21:43.185 EOF 00:21:43.185 )") 00:21:43.185 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:43.185 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:43.185 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:43.185 { 00:21:43.185 "params": { 00:21:43.185 "name": "Nvme$subsystem", 00:21:43.185 "trtype": "$TEST_TRANSPORT", 00:21:43.185 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.185 "adrfam": "ipv4", 00:21:43.185 "trsvcid": "$NVMF_PORT", 00:21:43.185 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.185 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.185 "hdgst": ${hdgst:-false}, 00:21:43.185 "ddgst": ${ddgst:-false} 00:21:43.185 }, 00:21:43.185 "method": "bdev_nvme_attach_controller" 00:21:43.185 } 00:21:43.185 EOF 00:21:43.185 )") 00:21:43.185 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:43.185 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:43.185 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:43.185 { 00:21:43.185 "params": { 00:21:43.185 "name": "Nvme$subsystem", 00:21:43.185 "trtype": "$TEST_TRANSPORT", 00:21:43.185 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.185 "adrfam": "ipv4", 00:21:43.185 "trsvcid": "$NVMF_PORT", 00:21:43.185 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.185 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.185 "hdgst": ${hdgst:-false}, 00:21:43.185 "ddgst": ${ddgst:-false} 00:21:43.185 }, 00:21:43.185 "method": "bdev_nvme_attach_controller" 00:21:43.185 } 00:21:43.185 EOF 00:21:43.185 )") 00:21:43.185 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:43.185 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:21:43.185 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:21:43.185 14:00:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:43.185 "params": { 00:21:43.185 "name": "Nvme1", 00:21:43.185 "trtype": "tcp", 00:21:43.185 "traddr": "10.0.0.2", 00:21:43.185 "adrfam": "ipv4", 00:21:43.185 "trsvcid": "4420", 00:21:43.185 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:43.185 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:43.185 "hdgst": false, 00:21:43.185 "ddgst": false 00:21:43.185 }, 00:21:43.185 "method": "bdev_nvme_attach_controller" 00:21:43.185 },{ 00:21:43.185 "params": { 00:21:43.185 "name": "Nvme2", 00:21:43.185 "trtype": "tcp", 00:21:43.185 "traddr": "10.0.0.2", 00:21:43.185 "adrfam": "ipv4", 00:21:43.185 "trsvcid": "4420", 00:21:43.185 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:43.185 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:43.185 "hdgst": false, 00:21:43.185 "ddgst": false 00:21:43.185 }, 00:21:43.185 "method": "bdev_nvme_attach_controller" 00:21:43.185 },{ 00:21:43.185 "params": { 00:21:43.185 "name": "Nvme3", 00:21:43.185 "trtype": "tcp", 00:21:43.185 "traddr": "10.0.0.2", 00:21:43.185 "adrfam": "ipv4", 00:21:43.185 "trsvcid": "4420", 00:21:43.185 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:43.185 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:43.185 "hdgst": false, 00:21:43.185 "ddgst": false 00:21:43.185 }, 00:21:43.185 "method": "bdev_nvme_attach_controller" 00:21:43.185 },{ 00:21:43.185 "params": { 00:21:43.185 "name": "Nvme4", 00:21:43.185 "trtype": "tcp", 00:21:43.185 "traddr": "10.0.0.2", 00:21:43.185 "adrfam": "ipv4", 00:21:43.185 "trsvcid": "4420", 00:21:43.185 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:43.185 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:43.185 "hdgst": false, 00:21:43.185 "ddgst": false 00:21:43.185 }, 00:21:43.185 "method": "bdev_nvme_attach_controller" 00:21:43.185 },{ 00:21:43.185 "params": { 00:21:43.185 "name": "Nvme5", 00:21:43.185 "trtype": "tcp", 00:21:43.185 "traddr": "10.0.0.2", 00:21:43.185 "adrfam": "ipv4", 00:21:43.185 "trsvcid": "4420", 00:21:43.185 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:43.185 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:43.185 "hdgst": false, 00:21:43.185 "ddgst": false 00:21:43.185 }, 00:21:43.185 "method": "bdev_nvme_attach_controller" 00:21:43.185 },{ 00:21:43.185 "params": { 00:21:43.185 "name": "Nvme6", 00:21:43.185 "trtype": "tcp", 00:21:43.185 "traddr": "10.0.0.2", 00:21:43.185 "adrfam": "ipv4", 00:21:43.185 "trsvcid": "4420", 00:21:43.185 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:43.185 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:43.185 "hdgst": false, 00:21:43.185 "ddgst": false 00:21:43.185 }, 00:21:43.185 "method": "bdev_nvme_attach_controller" 00:21:43.185 },{ 00:21:43.185 "params": { 00:21:43.185 "name": "Nvme7", 00:21:43.185 "trtype": "tcp", 00:21:43.185 "traddr": "10.0.0.2", 00:21:43.185 "adrfam": "ipv4", 00:21:43.185 "trsvcid": "4420", 00:21:43.185 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:43.185 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:43.185 "hdgst": false, 00:21:43.185 "ddgst": false 00:21:43.185 }, 00:21:43.185 "method": "bdev_nvme_attach_controller" 00:21:43.185 },{ 00:21:43.185 "params": { 00:21:43.185 "name": "Nvme8", 00:21:43.185 "trtype": "tcp", 00:21:43.185 "traddr": "10.0.0.2", 00:21:43.185 "adrfam": "ipv4", 00:21:43.185 "trsvcid": "4420", 00:21:43.185 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:43.185 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:43.185 "hdgst": false, 
00:21:43.185 "ddgst": false 00:21:43.185 }, 00:21:43.185 "method": "bdev_nvme_attach_controller" 00:21:43.185 },{ 00:21:43.185 "params": { 00:21:43.185 "name": "Nvme9", 00:21:43.185 "trtype": "tcp", 00:21:43.185 "traddr": "10.0.0.2", 00:21:43.185 "adrfam": "ipv4", 00:21:43.185 "trsvcid": "4420", 00:21:43.185 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:43.185 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:43.185 "hdgst": false, 00:21:43.185 "ddgst": false 00:21:43.185 }, 00:21:43.185 "method": "bdev_nvme_attach_controller" 00:21:43.185 },{ 00:21:43.185 "params": { 00:21:43.185 "name": "Nvme10", 00:21:43.185 "trtype": "tcp", 00:21:43.185 "traddr": "10.0.0.2", 00:21:43.185 "adrfam": "ipv4", 00:21:43.185 "trsvcid": "4420", 00:21:43.185 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:43.185 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:43.185 "hdgst": false, 00:21:43.185 "ddgst": false 00:21:43.185 }, 00:21:43.185 "method": "bdev_nvme_attach_controller" 00:21:43.185 }' 00:21:43.185 [2024-07-15 14:00:37.853385] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:21:43.185 [2024-07-15 14:00:37.853481] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3800716 ] 00:21:43.185 EAL: No free 2048 kB hugepages reported on node 1 00:21:43.185 [2024-07-15 14:00:37.920949] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.441 [2024-07-15 14:00:38.033972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:45.332 Running I/O for 10 seconds... 00:21:45.333 14:00:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:45.333 14:00:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:21:45.333 14:00:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:45.333 14:00:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.333 14:00:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:45.333 14:00:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.333 14:00:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:45.333 14:00:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:45.333 14:00:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:45.333 14:00:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:21:45.333 14:00:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:21:45.333 14:00:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:21:45.333 14:00:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:21:45.333 14:00:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:45.333 14:00:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme1n1 00:21:45.333 14:00:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.333 14:00:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:45.333 14:00:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:45.333 14:00:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.617 14:00:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:21:45.617 14:00:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:21:45.617 14:00:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:45.617 14:00:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:45.617 14:00:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:45.617 14:00:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:45.617 14:00:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:45.617 14:00:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.617 14:00:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:45.874 14:00:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.874 14:00:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:21:45.874 14:00:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:21:45.874 14:00:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:46.147 14:00:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:46.147 14:00:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:46.147 14:00:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:46.147 14:00:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:46.147 14:00:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.147 14:00:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:46.147 14:00:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.147 14:00:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:21:46.147 14:00:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:21:46.147 14:00:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:21:46.147 14:00:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:21:46.147 14:00:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:21:46.147 14:00:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 3800655 00:21:46.147 14:00:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 3800655 ']' 
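The iterations traced above are the harness's waitforio gate (target/shutdown.sh): it samples Nvme1n1's read counter over the bdevperf RPC socket until at least 100 reads have completed (here 3, then 67, then 131), proving I/O is actually flowing before the target gets killed mid-run. A minimal sketch of that loop, with scripts/rpc.py standing in for the rpc_cmd wrapper seen in the trace:

  waitforio() {
      local sock=$1 bdev=$2
      local ret=1 i count
      for ((i = 10; i != 0; i--)); do        # at most 10 samples, 0.25 s apart
          count=$(scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" \
                  | jq -r '.bdevs[0].num_read_ops')
          if [ "$count" -ge 100 ]; then      # enough reads observed, stop polling
              ret=0
              break
          fi
          sleep 0.25
      done
      return $ret
  }
  # waitforio /var/tmp/bdevperf.sock Nvme1n1 || exit 1

With the gate satisfied, the test goes on to kill the nvmf_tgt PID (3800655) while bdevperf keeps issuing I/O, which is what produces the qpair state errors that follow.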
00:21:46.147 14:00:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 3800655
00:21:46.147 14:00:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname
00:21:46.147 14:00:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:21:46.147 14:00:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3800655
00:21:46.147 14:00:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:21:46.147 14:00:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:21:46.147 14:00:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3800655'
killing process with pid 3800655
00:21:46.147 14:00:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 3800655
00:21:46.147 14:00:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 3800655
00:21:46.147 [2024-07-15 14:00:40.821324] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7da80 is same with the state(5) to be set
[... same recv-state error for tqpair=0x1d7da80 repeated through 14:00:40.822300 ...]
00:21:46.148 [2024-07-15 14:00:40.823565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:46.148 [2024-07-15 14:00:40.823605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.148 [2024-07-15 14:00:40.823623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:46.148 [2024-07-15 14:00:40.823636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.148 [2024-07-15 14:00:40.823651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:46.148 [2024-07-15 14:00:40.823664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.148 [2024-07-15 14:00:40.823678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:46.148 [2024-07-15 14:00:40.823691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.148 [2024-07-15 14:00:40.823704] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x582200 is same with the state(5) to be set
00:21:46.148 [2024-07-15 14:00:40.825019] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d80460 is same with the state(5) to be set
[... same recv-state error for tqpair=0x1d80460 repeated through 14:00:40.825885 ...]
00:21:46.148 [2024-07-15 14:00:40.828304] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7e3c0 is same with the state(5) to be set
[... same recv-state error for tqpair=0x1d7e3c0 repeated through 14:00:40.828529 ...]
00:21:46.148 [2024-07-15 14:00:40.829611] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7e880 is same with the state(5) to be set
00:21:46.148 [2024-07-15 14:00:40.829648] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7e880 is same with the state(5) to be set
00:21:46.148 [2024-07-15 14:00:40.829679] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7e880 is same with the state(5) to be set
00:21:46.148 [2024-07-15 14:00:40.830434] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7ed20 is same with the state(5) to be set
[... same recv-state error for tqpair=0x1d7ed20 repeated through 14:00:40.831273 ...]
00:21:46.149 [2024-07-15 14:00:40.832440] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7f1c0 is same with the state(5) to be set
[... same recv-state error for tqpair=0x1d7f1c0 repeated through 14:00:40.833311 ...]
00:21:46.149 [2024-07-15 14:00:40.834781] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7f660 is same with the state(5) to be set
[... same recv-state error for tqpair=0x1d7f660 repeated through 14:00:40.835613 ...]
00:21:46.150 [2024-07-15 14:00:40.836701] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7fb20 is same with the state(5) to be set
[... same recv-state error for tqpair=0x1d7fb20 repeated through 14:00:40.837527 ...]
00:21:46.150 [2024-07-15 14:00:40.838266] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7ffc0 is same with the state(5) to be set
[... same recv-state error for tqpair=0x1d7ffc0 repeated through 14:00:40.838509 ...]
with the state(5) to be set 00:21:46.150 [2024-07-15 14:00:40.838522] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7ffc0 is same with the state(5) to be set 00:21:46.150 [2024-07-15 14:00:40.838534] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7ffc0 is same with the state(5) to be set 00:21:46.150 [2024-07-15 14:00:40.838546] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7ffc0 is same with the state(5) to be set 00:21:46.150 [2024-07-15 14:00:40.838558] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7ffc0 is same with the state(5) to be set 00:21:46.150 [2024-07-15 14:00:40.838570] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7ffc0 is same with the state(5) to be set 00:21:46.150 [2024-07-15 14:00:40.838582] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7ffc0 is same with the state(5) to be set 00:21:46.150 [2024-07-15 14:00:40.838594] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7ffc0 is same with the state(5) to be set 00:21:46.150 [2024-07-15 14:00:40.838607] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7ffc0 is same with the state(5) to be set 00:21:46.150 [2024-07-15 14:00:40.838620] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7ffc0 is same with the state(5) to be set 00:21:46.150 [2024-07-15 14:00:40.838632] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7ffc0 is same with the state(5) to be set 00:21:46.150 [2024-07-15 14:00:40.838645] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7ffc0 is same with the state(5) to be set 00:21:46.150 [2024-07-15 14:00:40.838657] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7ffc0 is same with the state(5) to be set 00:21:46.150 [2024-07-15 14:00:40.838681] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7ffc0 is same with the state(5) to be set 00:21:46.150 [2024-07-15 14:00:40.838697] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7ffc0 is same with the state(5) to be set 00:21:46.150 [2024-07-15 14:00:40.838710] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7ffc0 is same with the state(5) to be set 00:21:46.150 [2024-07-15 14:00:40.838723] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7ffc0 is same with the state(5) to be set 00:21:46.150 [2024-07-15 14:00:40.838743] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7ffc0 is same with the state(5) to be set 00:21:46.150 [2024-07-15 14:00:40.838758] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7ffc0 is same with the state(5) to be set 00:21:46.150 [2024-07-15 14:00:40.838771] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7ffc0 is same with the state(5) to be set 00:21:46.150 [2024-07-15 14:00:40.838783] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7ffc0 is same with the state(5) to be set 00:21:46.150 [2024-07-15 14:00:40.838796] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7ffc0 is same with the state(5) to be set 00:21:46.150 [2024-07-15 14:00:40.838808] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7ffc0 is same with the state(5) to be set 00:21:46.150 [2024-07-15 14:00:40.838820] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7ffc0 is same with the state(5) to be set 00:21:46.150 [2024-07-15 14:00:40.838840] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7ffc0 is same with the state(5) to be set 00:21:46.150 [2024-07-15 14:00:40.838853] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7ffc0 is same with the state(5) to be set 00:21:46.150 [2024-07-15 14:00:40.838866] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7ffc0 is same with the state(5) to be set 00:21:46.150 [2024-07-15 14:00:40.838878] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7ffc0 is same with the state(5) to be set 00:21:46.150 [2024-07-15 14:00:40.838890] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7ffc0 is same with the state(5) to be set 00:21:46.150 [2024-07-15 14:00:40.838903] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7ffc0 is same with the state(5) to be set 00:21:46.150 [2024-07-15 14:00:40.838916] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7ffc0 is same with the state(5) to be set 00:21:46.150 [2024-07-15 14:00:40.838928] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7ffc0 is same with the state(5) to be set 00:21:46.150 [2024-07-15 14:00:40.838941] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7ffc0 is same with the state(5) to be set 00:21:46.150 [2024-07-15 14:00:40.838953] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7ffc0 is same with the state(5) to be set 00:21:46.150 [2024-07-15 14:00:40.838965] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7ffc0 is same with the state(5) to be set 00:21:46.150 [2024-07-15 14:00:40.838978] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7ffc0 is same with the state(5) to be set 00:21:46.150 [2024-07-15 14:00:40.838991] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7ffc0 is same with the state(5) to be set 00:21:46.150 [2024-07-15 14:00:40.839003] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7ffc0 is same with the state(5) to be set 00:21:46.151 [2024-07-15 14:00:40.839015] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7ffc0 is same with the state(5) to be set 00:21:46.151 [2024-07-15 14:00:40.839028] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7ffc0 is same with the state(5) to be set 00:21:46.151 [2024-07-15 14:00:40.839040] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7ffc0 is same with the state(5) to be set 00:21:46.151 [2024-07-15 14:00:40.839053] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7ffc0 is same with the state(5) to be set 00:21:46.151 [2024-07-15 14:00:40.839069] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7ffc0 is same with the state(5) to be set 00:21:46.151 [2024-07-15 14:00:40.839082] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7ffc0 is same with the 
state(5) to be set 00:21:46.151 [2024-07-15 14:00:40.839094] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7ffc0 is same with the state(5) to be set 00:21:46.151 [2024-07-15 14:00:40.839152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.151 [2024-07-15 14:00:40.839185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.839203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.151 [2024-07-15 14:00:40.839217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.839231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.151 [2024-07-15 14:00:40.839244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.839258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.151 [2024-07-15 14:00:40.839271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.839284] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53b00 is same with the state(5) to be set 00:21:46.151 [2024-07-15 14:00:40.839337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.151 [2024-07-15 14:00:40.839357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.839372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.151 [2024-07-15 14:00:40.839385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.839399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.151 [2024-07-15 14:00:40.839412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.839429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.151 [2024-07-15 14:00:40.839442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.839454] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b8c90 is same with the state(5) to be set 00:21:46.151 [2024-07-15 14:00:40.839500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.151 [2024-07-15 14:00:40.839520] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.839535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.151 [2024-07-15 14:00:40.839548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.839568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.151 [2024-07-15 14:00:40.839582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.839596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.151 [2024-07-15 14:00:40.839609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.839632] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d8850 is same with the state(5) to be set 00:21:46.151 [2024-07-15 14:00:40.839679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.151 [2024-07-15 14:00:40.839699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.839714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.151 [2024-07-15 14:00:40.839728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.839750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.151 [2024-07-15 14:00:40.839765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.839781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.151 [2024-07-15 14:00:40.839794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.839807] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa44690 is same with the state(5) to be set 00:21:46.151 [2024-07-15 14:00:40.839855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.151 [2024-07-15 14:00:40.839875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.839890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.151 [2024-07-15 14:00:40.839903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.839917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.151 [2024-07-15 14:00:40.839931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.839945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.151 [2024-07-15 14:00:40.839958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.839970] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cb880 is same with the state(5) to be set 00:21:46.151 [2024-07-15 14:00:40.840015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.151 [2024-07-15 14:00:40.840035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.840056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.151 [2024-07-15 14:00:40.840074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.840090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.151 [2024-07-15 14:00:40.840103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.840116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.151 [2024-07-15 14:00:40.840129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.840142] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb60980 is same with the state(5) to be set 00:21:46.151 [2024-07-15 14:00:40.840187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.151 [2024-07-15 14:00:40.840207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.840222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.151 [2024-07-15 14:00:40.840236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.840250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.151 [2024-07-15 14:00:40.840263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.840277] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.151 [2024-07-15 14:00:40.840290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.840303] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c0eb0 is same with the state(5) to be set 00:21:46.151 [2024-07-15 14:00:40.840334] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x582200 (9): Bad file descriptor 00:21:46.151 [2024-07-15 14:00:40.840386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.151 [2024-07-15 14:00:40.840406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.840422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.151 [2024-07-15 14:00:40.840435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.840449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.151 [2024-07-15 14:00:40.840463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.840477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.151 [2024-07-15 14:00:40.840490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.840503] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x496610 is same with the state(5) to be set 00:21:46.151 [2024-07-15 14:00:40.840734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.151 [2024-07-15 14:00:40.840765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.840792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.151 [2024-07-15 14:00:40.840808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.840824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.151 [2024-07-15 14:00:40.840838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.840854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.151 [2024-07-15 14:00:40.840867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.840882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.151 [2024-07-15 14:00:40.840896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.840912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.151 [2024-07-15 14:00:40.840926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.840941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.151 [2024-07-15 14:00:40.840955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.840970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.151 [2024-07-15 14:00:40.840983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.840999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.151 [2024-07-15 14:00:40.841012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.841027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.151 [2024-07-15 14:00:40.841041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.841056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.151 [2024-07-15 14:00:40.841076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.841092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.151 [2024-07-15 14:00:40.841106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.841122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.151 [2024-07-15 14:00:40.841144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.841161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.151 [2024-07-15 14:00:40.841175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.841191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.151 [2024-07-15 14:00:40.841205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.841221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.151 [2024-07-15 14:00:40.841234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.841250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.151 [2024-07-15 14:00:40.841264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.841280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.151 [2024-07-15 14:00:40.841293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.841309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.151 [2024-07-15 14:00:40.841323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.841338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.151 [2024-07-15 14:00:40.841352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.841368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.151 [2024-07-15 14:00:40.841381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.841397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.151 [2024-07-15 14:00:40.841410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.841426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.151 [2024-07-15 14:00:40.841440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.841455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.151 [2024-07-15 14:00:40.841469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.841484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.151 [2024-07-15 14:00:40.841498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.841518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.151 [2024-07-15 14:00:40.841532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.841547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.151 [2024-07-15 14:00:40.841561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.841577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.151 [2024-07-15 14:00:40.841590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.841606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.151 [2024-07-15 14:00:40.841620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.841636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.151 [2024-07-15 14:00:40.841650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.841665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.151 [2024-07-15 14:00:40.841679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.841694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.151 [2024-07-15 14:00:40.841707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.841723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.151 [2024-07-15 14:00:40.841743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.151 [2024-07-15 14:00:40.841761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.151 [2024-07-15 14:00:40.841775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:46.152 [2024-07-15 14:00:40.841796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.841810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 14:00:40.841825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.841838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 14:00:40.841853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.841867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 14:00:40.841882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.841899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 14:00:40.841915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.841928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 14:00:40.841943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.841957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 14:00:40.841971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.841985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 14:00:40.841999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.842013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 14:00:40.842027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.842041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 14:00:40.842055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.842069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 
[2024-07-15 14:00:40.842084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.842097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 14:00:40.842112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.842126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 14:00:40.842141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.842154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 14:00:40.842169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.842182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 14:00:40.842197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.842211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 14:00:40.842226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.842239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 14:00:40.842258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.842272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 14:00:40.842288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.842302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 14:00:40.842317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.842331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 14:00:40.842346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.842360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 
14:00:40.842375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.842388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 14:00:40.842404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.842417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 14:00:40.842432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.842446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 14:00:40.842461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.842474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 14:00:40.842490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.842503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 14:00:40.842518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.842532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 14:00:40.842548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.842561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 14:00:40.842577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.842590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 14:00:40.842605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.842622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 14:00:40.842639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.842652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 
14:00:40.842744] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa06d40 was disconnected and freed. reset controller. 00:21:46.152 [2024-07-15 14:00:40.842809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.842827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 14:00:40.842847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.842863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 14:00:40.842878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.842892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 14:00:40.842908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.842922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 14:00:40.842937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.842951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 14:00:40.842966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.842980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 14:00:40.842995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.843009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 14:00:40.843024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.843037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 14:00:40.843053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.843066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 14:00:40.843082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.843096] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 14:00:40.843111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.843129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 14:00:40.843146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.843159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 14:00:40.843175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.843188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 14:00:40.843204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.843217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 14:00:40.843233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.843247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 14:00:40.843263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.843276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 14:00:40.843291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.843305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 14:00:40.843320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.843333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 14:00:40.843348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.843362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 14:00:40.843378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.843391] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 14:00:40.843407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.843420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 14:00:40.843436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.843449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 14:00:40.843465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.843478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 14:00:40.843497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.843512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 14:00:40.843527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.843541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 14:00:40.843556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.843570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 14:00:40.843585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.843605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 14:00:40.843621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.843635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 14:00:40.843651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.843665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 14:00:40.843680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.843694] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 14:00:40.843709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.843723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 14:00:40.843744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.843762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 14:00:40.843782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.843796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 14:00:40.843812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.152 [2024-07-15 14:00:40.843826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.152 [2024-07-15 14:00:40.843842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.843856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.843872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.843890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.843907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.843921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.843937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.843951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.843967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.843981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.843999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.844014] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.844030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.844044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.844060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.844074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.844090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.844110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.844126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.844141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.844157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.844172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.844188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.844202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.844218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.844232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.844248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.844262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.844282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.844297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.844313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.844327] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.844343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.844357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.844373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.844386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.844403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.844417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.844433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.844447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.844463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.844477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.844493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.844507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.844523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.844537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.844553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.844567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.844583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.844602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.844619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.844633] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.844649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.844666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.844683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.844697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.844713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.844727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.844749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.844765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.844861] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x983f90 was disconnected and freed. reset controller. 00:21:46.153 [2024-07-15 14:00:40.848079] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:21:46.153 [2024-07-15 14:00:40.848119] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:21:46.153 [2024-07-15 14:00:40.848146] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b8c90 (9): Bad file descriptor 00:21:46.153 [2024-07-15 14:00:40.848168] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c0eb0 (9): Bad file descriptor 00:21:46.153 [2024-07-15 14:00:40.848237] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:46.153 [2024-07-15 14:00:40.849092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.849123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.849147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.849163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.849180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.849195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.849211] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.849225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.849242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.849255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.849272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.849286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.849302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.849316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.849339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.849356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.849372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.849386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.849403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.849417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.849434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.849448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.849463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.849477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.849492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.849506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.849522] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.849535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.849551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.849564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.849580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.849594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.849609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.849623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.849638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.849652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.849667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.849681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.849696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.849713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.849729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.849752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.849770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.849783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.849798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.849812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.849828] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.849842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.849858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.849872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.849887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.849901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.849917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.849931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.849946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.849959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.849974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.849988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.850004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.850018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.850033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.850047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.850062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.850076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.850095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.850110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.850125] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.850139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.850154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.850168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.850183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.850197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.850212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.850226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.850242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.850255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.850271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.850284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.850300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.153 [2024-07-15 14:00:40.850315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.153 [2024-07-15 14:00:40.850331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.154 [2024-07-15 14:00:40.850344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.154 [2024-07-15 14:00:40.850360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.154 [2024-07-15 14:00:40.850374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.154 [2024-07-15 14:00:40.850389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.154 [2024-07-15 14:00:40.850403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.154 [2024-07-15 14:00:40.850418] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.154 [2024-07-15 14:00:40.850432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.154 [2024-07-15 14:00:40.850447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.154 [2024-07-15 14:00:40.850469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.154 [2024-07-15 14:00:40.850485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.154 [2024-07-15 14:00:40.850499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.154 [2024-07-15 14:00:40.850515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.154 [2024-07-15 14:00:40.850528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.154 [2024-07-15 14:00:40.850544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.154 [2024-07-15 14:00:40.850557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.154 [2024-07-15 14:00:40.850572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.154 [2024-07-15 14:00:40.850586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.154 [2024-07-15 14:00:40.850601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.154 [2024-07-15 14:00:40.850615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.154 [2024-07-15 14:00:40.850631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.154 [2024-07-15 14:00:40.850644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.154 [2024-07-15 14:00:40.850660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.154 [2024-07-15 14:00:40.850673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.154 [2024-07-15 14:00:40.850689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.154 [2024-07-15 14:00:40.850703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.154 [2024-07-15 14:00:40.850719] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.154 [2024-07-15 14:00:40.850732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.154 [2024-07-15 14:00:40.850755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.154 [2024-07-15 14:00:40.850770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.154 [2024-07-15 14:00:40.850786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.154 [2024-07-15 14:00:40.850800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.154 [2024-07-15 14:00:40.850816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.154 [2024-07-15 14:00:40.850834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.154 [2024-07-15 14:00:40.850851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.154 [2024-07-15 14:00:40.850866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.154 [2024-07-15 14:00:40.850881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.154 [2024-07-15 14:00:40.850895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.154 [2024-07-15 14:00:40.850910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.154 [2024-07-15 14:00:40.850924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.154 [2024-07-15 14:00:40.850940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.154 [2024-07-15 14:00:40.850953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.154 [2024-07-15 14:00:40.850969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.154 [2024-07-15 14:00:40.850983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.154 [2024-07-15 14:00:40.850999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.154 [2024-07-15 14:00:40.851013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.154 [2024-07-15 14:00:40.851029] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.154 [2024-07-15 14:00:40.851043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.154 [2024-07-15 14:00:40.851083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:21:46.154 [2024-07-15 14:00:40.851165] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa058b0 was disconnected and freed. reset controller. 00:21:46.154 [2024-07-15 14:00:40.851788] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:46.154 [2024-07-15 14:00:40.851866] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:46.154 [2024-07-15 14:00:40.851956] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:46.154 [2024-07-15 14:00:40.852026] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:46.154 [2024-07-15 14:00:40.852168] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:46.154 [2024-07-15 14:00:40.852479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.154 [2024-07-15 14:00:40.852508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c0eb0 with addr=10.0.0.2, port=4420 00:21:46.154 [2024-07-15 14:00:40.852525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c0eb0 is same with the state(5) to be set 00:21:46.154 [2024-07-15 14:00:40.852770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.154 [2024-07-15 14:00:40.852797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b8c90 with addr=10.0.0.2, port=4420 00:21:46.154 [2024-07-15 14:00:40.852812] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b8c90 is same with the state(5) to be set 00:21:46.154 [2024-07-15 14:00:40.852866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.154 [2024-07-15 14:00:40.852886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.154 [2024-07-15 14:00:40.852902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.154 [2024-07-15 14:00:40.852916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.154 [2024-07-15 14:00:40.852932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.154 [2024-07-15 14:00:40.852944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.154 [2024-07-15 14:00:40.852958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.154 [2024-07-15 14:00:40.852971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.154 [2024-07-15 14:00:40.852983] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53920 is same with the state(5) to be set 00:21:46.154 [2024-07-15 14:00:40.853021] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb53b00 (9): Bad file descriptor 00:21:46.154 [2024-07-15 14:00:40.853066] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9d8850 (9): Bad file descriptor 00:21:46.154 [2024-07-15 14:00:40.853098] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa44690 (9): Bad file descriptor 00:21:46.154 [2024-07-15 14:00:40.853131] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cb880 (9): Bad file descriptor 00:21:46.154 [2024-07-15 14:00:40.853161] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb60980 (9): Bad file descriptor 00:21:46.154 [2024-07-15 14:00:40.853194] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x496610 (9): Bad file descriptor 00:21:46.154 [2024-07-15 14:00:40.854514] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:46.154 [2024-07-15 14:00:40.854569] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:21:46.154 [2024-07-15 14:00:40.854614] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c0eb0 (9): Bad file descriptor 00:21:46.154 [2024-07-15 14:00:40.854637] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b8c90 (9): Bad file descriptor 00:21:46.154 [2024-07-15 14:00:40.854706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.154 [2024-07-15 14:00:40.854734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.154 [2024-07-15 14:00:40.854764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.154 [2024-07-15 14:00:40.854780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.154 [2024-07-15 14:00:40.854796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.154 [2024-07-15 14:00:40.854810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.154 [2024-07-15 14:00:40.854826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.154 [2024-07-15 14:00:40.854840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.154 [2024-07-15 14:00:40.854862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.154 [2024-07-15 14:00:40.854877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.154 [2024-07-15 14:00:40.854893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.154 [2024-07-15 14:00:40.854906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.154 [2024-07-15 14:00:40.854922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.154 [2024-07-15 14:00:40.854936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.154 [2024-07-15 14:00:40.854952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.154 [2024-07-15 14:00:40.854966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.154 [2024-07-15 14:00:40.854982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.154 [2024-07-15 14:00:40.854996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.154 [2024-07-15 14:00:40.855012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.154 [2024-07-15 14:00:40.855025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.154 [2024-07-15 14:00:40.855042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.154 [2024-07-15 14:00:40.855056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.154 [2024-07-15 14:00:40.855071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.154 [2024-07-15 14:00:40.855085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.154 [2024-07-15 14:00:40.855101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.154 [2024-07-15 14:00:40.855114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.154 [2024-07-15 14:00:40.855130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.154 [2024-07-15 14:00:40.855144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.154 [2024-07-15 14:00:40.855159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.154 [2024-07-15 14:00:40.855174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.154 [2024-07-15 14:00:40.855190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:46.154 [2024-07-15 14:00:40.855203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.154 [2024-07-15 14:00:40.855220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.154 [2024-07-15 14:00:40.855237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.154 [2024-07-15 14:00:40.855254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.154 [2024-07-15 14:00:40.855267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.154 [2024-07-15 14:00:40.855283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.154 [2024-07-15 14:00:40.855297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.154 [2024-07-15 14:00:40.855313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.154 [2024-07-15 14:00:40.855327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.154 [2024-07-15 14:00:40.855342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.154 [2024-07-15 14:00:40.855356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.154 [2024-07-15 14:00:40.855372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.154 [2024-07-15 14:00:40.855386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.154 [2024-07-15 14:00:40.855401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.154 [2024-07-15 14:00:40.855415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.154 [2024-07-15 14:00:40.855431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.154 [2024-07-15 14:00:40.855444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.154 [2024-07-15 14:00:40.855460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.154 [2024-07-15 14:00:40.855474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.154 [2024-07-15 14:00:40.855489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:46.154 [2024-07-15 14:00:40.855503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.154 [2024-07-15 14:00:40.855518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.154 [2024-07-15 14:00:40.855532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.154 [2024-07-15 14:00:40.855547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.154 [2024-07-15 14:00:40.855560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.154 [2024-07-15 14:00:40.855575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.154 [2024-07-15 14:00:40.855589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.154 [2024-07-15 14:00:40.855608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.154 [2024-07-15 14:00:40.855623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.154 [2024-07-15 14:00:40.855638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.154 [2024-07-15 14:00:40.855651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.154 [2024-07-15 14:00:40.855667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.154 [2024-07-15 14:00:40.855680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.154 [2024-07-15 14:00:40.855696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.154 [2024-07-15 14:00:40.855709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.154 [2024-07-15 14:00:40.855724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.154 [2024-07-15 14:00:40.855994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.155 [2024-07-15 14:00:40.856017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.155 [2024-07-15 14:00:40.856032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.155 [2024-07-15 14:00:40.856048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.155 [2024-07-15 
14:00:40.856062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.155 [2024-07-15 14:00:40.856077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.155 [2024-07-15 14:00:40.856091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.155 [2024-07-15 14:00:40.856107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.155 [2024-07-15 14:00:40.856120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.155 [2024-07-15 14:00:40.856136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.155 [2024-07-15 14:00:40.856149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.155 [2024-07-15 14:00:40.856166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.155 [2024-07-15 14:00:40.856179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.155 [2024-07-15 14:00:40.856195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.155 [2024-07-15 14:00:40.856208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.155 [2024-07-15 14:00:40.856224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.155 [2024-07-15 14:00:40.856242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.155 [2024-07-15 14:00:40.856259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.155 [2024-07-15 14:00:40.856273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.155 [2024-07-15 14:00:40.856289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.155 [2024-07-15 14:00:40.856302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.155 [2024-07-15 14:00:40.856318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.155 [2024-07-15 14:00:40.856331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.155 [2024-07-15 14:00:40.856347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.155 [2024-07-15 14:00:40.856361] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.155 [2024-07-15 14:00:40.856376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.155 [2024-07-15 14:00:40.856390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.155 [2024-07-15 14:00:40.856406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.155 [2024-07-15 14:00:40.856419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.155 [2024-07-15 14:00:40.856435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.155 [2024-07-15 14:00:40.856449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.155 [2024-07-15 14:00:40.856464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.155 [2024-07-15 14:00:40.856478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.155 [2024-07-15 14:00:40.856494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.155 [2024-07-15 14:00:40.856507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.155 [2024-07-15 14:00:40.856523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.155 [2024-07-15 14:00:40.856536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.155 [2024-07-15 14:00:40.856552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.155 [2024-07-15 14:00:40.856566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.155 [2024-07-15 14:00:40.856582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.155 [2024-07-15 14:00:40.856595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.155 [2024-07-15 14:00:40.856615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.155 [2024-07-15 14:00:40.856629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.155 [2024-07-15 14:00:40.856646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.155 [2024-07-15 14:00:40.856659] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.155 [2024-07-15 14:00:40.856675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.155 [2024-07-15 14:00:40.856688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.155 [2024-07-15 14:00:40.856704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.155 [2024-07-15 14:00:40.856718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.155 [2024-07-15 14:00:40.856733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.155 [2024-07-15 14:00:40.856756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.155 [2024-07-15 14:00:40.856773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.155 [2024-07-15 14:00:40.856786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.155 [2024-07-15 14:00:40.856803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.155 [2024-07-15 14:00:40.856816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.155 [2024-07-15 14:00:40.856832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.155 [2024-07-15 14:00:40.856846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.155 [2024-07-15 14:00:40.856861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.155 [2024-07-15 14:00:40.856875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.155 [2024-07-15 14:00:40.856890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.155 [2024-07-15 14:00:40.856904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.155 [2024-07-15 14:00:40.856918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa04610 is same with the state(5) to be set 00:21:46.155 [2024-07-15 14:00:40.858239] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:46.155 [2024-07-15 14:00:40.858556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.155 [2024-07-15 14:00:40.858583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb60980 with addr=10.0.0.2, port=4420 00:21:46.155 
00:21:46.155 [2024-07-15 14:00:40.858599] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb60980 is same with the state(5) to be set
00:21:46.155 [2024-07-15 14:00:40.858615] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:21:46.155 [2024-07-15 14:00:40.858633] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:21:46.155 [2024-07-15 14:00:40.858649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:21:46.155 [2024-07-15 14:00:40.858672] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:21:46.155 [2024-07-15 14:00:40.858686] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:21:46.155 [2024-07-15 14:00:40.858699] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:21:46.155 [2024-07-15 14:00:40.859058] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:46.155 [2024-07-15 14:00:40.859082] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:46.155 [2024-07-15 14:00:40.859339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.155 [2024-07-15 14:00:40.859365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x582200 with addr=10.0.0.2, port=4420
00:21:46.155 [2024-07-15 14:00:40.859381] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x582200 is same with the state(5) to be set
00:21:46.155 [2024-07-15 14:00:40.859400] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb60980 (9): Bad file descriptor
00:21:46.155 [2024-07-15 14:00:40.859768] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x582200 (9): Bad file descriptor
00:21:46.155 [2024-07-15 14:00:40.859794] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:21:46.155 [2024-07-15 14:00:40.859808] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:21:46.155 [2024-07-15 14:00:40.859821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:21:46.155 [2024-07-15 14:00:40.859891] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:21:46.155 [2024-07-15 14:00:40.859914] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:21:46.155 [2024-07-15 14:00:40.859932] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:46.155 [2024-07-15 14:00:40.859960] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:46.155 [2024-07-15 14:00:40.859975] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:46.155 [2024-07-15 14:00:40.859989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:46.155 [2024-07-15 14:00:40.860048] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:46.155 [2024-07-15 14:00:40.860318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.155 [2024-07-15 14:00:40.860344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b8c90 with addr=10.0.0.2, port=4420
00:21:46.155 [2024-07-15 14:00:40.860359] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b8c90 is same with the state(5) to be set
00:21:46.155 [2024-07-15 14:00:40.860533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.155 [2024-07-15 14:00:40.860558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c0eb0 with addr=10.0.0.2, port=4420
00:21:46.155 [2024-07-15 14:00:40.860576] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c0eb0 is same with the state(5) to be set
00:21:46.155 [2024-07-15 14:00:40.860630] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b8c90 (9): Bad file descriptor
00:21:46.155 [2024-07-15 14:00:40.860652] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c0eb0 (9): Bad file descriptor
00:21:46.155 [2024-07-15 14:00:40.860701] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:21:46.155 [2024-07-15 14:00:40.860732] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:21:46.155 [2024-07-15 14:00:40.860755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:21:46.155 [2024-07-15 14:00:40.860775] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:21:46.155 [2024-07-15 14:00:40.860789] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:21:46.155 [2024-07-15 14:00:40.860802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:21:46.155 [2024-07-15 14:00:40.860853] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:46.155 [2024-07-15 14:00:40.860871] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:46.155 [2024-07-15 14:00:40.862249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb53920 (9): Bad file descriptor
[condensed: 2024-07-15 14:00:40.862437-864400, nvme_qpair.c repeated *NOTICE* pairs - READ sqid:1 cid:0-63 lba:16384-24448 (len:128 each, SGL TRANSPORT DATA BLOCK) all completed ABORTED - SQ DELETION (00/08) qid:1]
00:21:46.156 [2024-07-15 14:00:40.864414] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x991320 is same with the state(5) to be set
[condensed: 2024-07-15 14:00:40.865694-867633, nvme_qpair.c repeated *NOTICE* pairs - READ sqid:1 cid:0-63 lba:16384-24448 (len:128 each, SGL TRANSPORT DATA BLOCK) all completed ABORTED - SQ DELETION (00/08) qid:1]
00:21:46.157 [2024-07-15 14:00:40.867647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb29b40 is same with the state(5) to be set
[condensed: 2024-07-15 14:00:40.868925 onward, nvme_qpair.c repeated *NOTICE* pairs - READ sqid:1 cid:0-31 lba:16384-20352 completed ABORTED - SQ DELETION (00/08) qid:1; capture ends mid-record]
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.869891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.869906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.869920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.869935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.869949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.869964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.869977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.869993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.870006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.870022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.870035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.870051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.870064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.870080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.870093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.870109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.870122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.870138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.870151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.870167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.870180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.870196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.870209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.870228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.870243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.870260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.870273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.870289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.870302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.870318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.870332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.870347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.870360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.870376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.870389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.870404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.870418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.870433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.870447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.870463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:46.158 [2024-07-15 14:00:40.870476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.870491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.870505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.870520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.870534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.870550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.870563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.870578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.870595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.870611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.870625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.870640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.870654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.870669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.870682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.870698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.870711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.870727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.870748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.870765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 
14:00:40.870778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.870794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.870807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.870822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.870835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.870849] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb36e40 is same with the state(5) to be set 00:21:46.158 [2024-07-15 14:00:40.872095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.872118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.872138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.872153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.872169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.872184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.872200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.872219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.872236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.872249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.872265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.872279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.872295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.872308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.872324] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.872337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.872352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.872366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.872381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.872394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.872410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.872423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.872439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.872453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.872469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.872482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.872498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.872511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.872527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.872541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.872556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.872570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.872590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.872605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.872621] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.872635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.872650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.872664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.872679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.872693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.872709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.872722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.872746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.872763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.872780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.872796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.872812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.872826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.872842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.872856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.872871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.872885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.872901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.872915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.872931] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.872945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.872960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.872978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.872994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.873008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.873024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.873037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.873052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.873066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.873081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.873095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.873118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.873132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.873147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.873160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.158 [2024-07-15 14:00:40.873176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.158 [2024-07-15 14:00:40.873189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.873205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.873220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.873236] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.873250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.873267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.873280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.873296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.873310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.873326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.873340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.873355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.873374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.873391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.873405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.873421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.873435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.873453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.873467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.873484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.873498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.873513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.873529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.873544] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.873558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.873574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.873588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.873605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.873619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.873635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.873649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.873665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.873679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.873695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.873709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.873725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.873744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.873765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.873780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.873796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.873810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.873826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.873840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.873855] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.873870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.873886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.873900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.873915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.873929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.873944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.873958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.873973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.873995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.874010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.874024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.874039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.874053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.874068] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb38290 is same with the state(5) to be set 00:21:46.159 [2024-07-15 14:00:40.875335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.875358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.875379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.875394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.875420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.875435] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.875451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.875465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.875480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.875493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.875509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.875522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.875538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.875551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.875566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.875583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.875598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.875611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.875627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.875648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.875663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.875677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.875694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.875708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.875723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.875743] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.875761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.875779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.875794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.875812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.875828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.875842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.875857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.875871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.875887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.875900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.875916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.875929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.875944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.875958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.875973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.875996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.876012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.876025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.876041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.876054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.876070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.876083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.876099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.876112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.876128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.876141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.876156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.876170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.876189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.876203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.876219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.876232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.876248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.876262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.876277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.876290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.876305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.876318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.876333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.876347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.876362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.876375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.876391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.876404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.876419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.876433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.876448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.876461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.876477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.876490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.876506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.876519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.876535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.876552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.876567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.876581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.876596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.876610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.876625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.876638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.876653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.159 [2024-07-15 14:00:40.876667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.159 [2024-07-15 14:00:40.876683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.160 [2024-07-15 14:00:40.876696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.160 [2024-07-15 14:00:40.876711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.160 [2024-07-15 14:00:40.876725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.160 [2024-07-15 14:00:40.876746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.160 [2024-07-15 14:00:40.876761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.160 [2024-07-15 14:00:40.876784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.160 [2024-07-15 14:00:40.876798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.160 [2024-07-15 14:00:40.876813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.160 [2024-07-15 14:00:40.876826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.160 [2024-07-15 14:00:40.876842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.160 [2024-07-15 14:00:40.876855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.160 [2024-07-15 14:00:40.876870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.160 [2024-07-15 14:00:40.876884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.160 [2024-07-15 14:00:40.876899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.160 [2024-07-15 14:00:40.876912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.160 [2024-07-15 14:00:40.876932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.160 [2024-07-15 14:00:40.876946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:46.160 [2024-07-15 14:00:40.876961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.160 [2024-07-15 14:00:40.876975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.160 [2024-07-15 14:00:40.876992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.160 [2024-07-15 14:00:40.877005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.160 [2024-07-15 14:00:40.877020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.160 [2024-07-15 14:00:40.877034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.160 [2024-07-15 14:00:40.877049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.160 [2024-07-15 14:00:40.877063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.160 [2024-07-15 14:00:40.877078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.160 [2024-07-15 14:00:40.877091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.160 [2024-07-15 14:00:40.877107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.160 [2024-07-15 14:00:40.877121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.160 [2024-07-15 14:00:40.877136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.160 [2024-07-15 14:00:40.877149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.160 [2024-07-15 14:00:40.877165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.160 [2024-07-15 14:00:40.877181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.160 [2024-07-15 14:00:40.877197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.160 [2024-07-15 14:00:40.877221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.160 [2024-07-15 14:00:40.877237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.160 [2024-07-15 14:00:40.877251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.160 [2024-07-15 
14:00:40.877267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.160 [2024-07-15 14:00:40.877280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.160 [2024-07-15 14:00:40.877294] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e1e90 is same with the state(5) to be set 00:21:46.160 [2024-07-15 14:00:40.878973] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:21:46.160 [2024-07-15 14:00:40.879007] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:21:46.160 [2024-07-15 14:00:40.879026] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:21:46.160 [2024-07-15 14:00:40.879045] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:21:46.160 [2024-07-15 14:00:40.879183] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:46.160 [2024-07-15 14:00:40.879289] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:21:46.160 [2024-07-15 14:00:40.879559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.160 [2024-07-15 14:00:40.879588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cb880 with addr=10.0.0.2, port=4420 00:21:46.160 [2024-07-15 14:00:40.879605] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cb880 is same with the state(5) to be set 00:21:46.160 [2024-07-15 14:00:40.879716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.160 [2024-07-15 14:00:40.879756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d8850 with addr=10.0.0.2, port=4420 00:21:46.160 [2024-07-15 14:00:40.879775] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d8850 is same with the state(5) to be set 00:21:46.160 [2024-07-15 14:00:40.879875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.160 [2024-07-15 14:00:40.879899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x496610 with addr=10.0.0.2, port=4420 00:21:46.160 [2024-07-15 14:00:40.879915] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x496610 is same with the state(5) to be set 00:21:46.160 [2024-07-15 14:00:40.880045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.160 [2024-07-15 14:00:40.880069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb53b00 with addr=10.0.0.2, port=4420 00:21:46.160 [2024-07-15 14:00:40.880085] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53b00 is same with the state(5) to be set 00:21:46.160 [2024-07-15 14:00:40.881201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.160 [2024-07-15 14:00:40.881226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.160 [2024-07-15 14:00:40.881250] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.160 [2024-07-15 14:00:40.881266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.160 [2024-07-15 14:00:40.881283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.160 [2024-07-15 14:00:40.881297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.160 [2024-07-15 14:00:40.881313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.160 [2024-07-15 14:00:40.881327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.160 [2024-07-15 14:00:40.881343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.160 [2024-07-15 14:00:40.881357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.160 [2024-07-15 14:00:40.881381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.160 [2024-07-15 14:00:40.881396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.160 [2024-07-15 14:00:40.881412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.160 [2024-07-15 14:00:40.881426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.160 [2024-07-15 14:00:40.881443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.160 [2024-07-15 14:00:40.881457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.160 [2024-07-15 14:00:40.881473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.160 [2024-07-15 14:00:40.881487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.160 [2024-07-15 14:00:40.881503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.160 [2024-07-15 14:00:40.881517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.160 [2024-07-15 14:00:40.881533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.160 [2024-07-15 14:00:40.881547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.160 [2024-07-15 14:00:40.881562] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.160 [2024-07-15 14:00:40.881576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.160 [2024-07-15 14:00:40.881592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.160 [2024-07-15 14:00:40.881605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.160 [2024-07-15 14:00:40.881622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.160 [2024-07-15 14:00:40.881636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.160 [2024-07-15 14:00:40.881652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.160 [2024-07-15 14:00:40.881666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.160 [2024-07-15 14:00:40.881682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.160 [2024-07-15 14:00:40.881696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.160 [2024-07-15 14:00:40.881712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.160 [2024-07-15 14:00:40.881726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.160 [2024-07-15 14:00:40.881750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.160 [2024-07-15 14:00:40.881770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.160 [2024-07-15 14:00:40.881786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.160 [2024-07-15 14:00:40.881800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.160 [2024-07-15 14:00:40.881816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.160 [2024-07-15 14:00:40.881830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.160 [2024-07-15 14:00:40.881846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.160 [2024-07-15 14:00:40.881859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.160 [2024-07-15 14:00:40.881875] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.160 [2024-07-15 14:00:40.881889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.160 [2024-07-15 14:00:40.881905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.160 [2024-07-15 14:00:40.881919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.160 [2024-07-15 14:00:40.881935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.160 [2024-07-15 14:00:40.881949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.160 [2024-07-15 14:00:40.881965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.160 [2024-07-15 14:00:40.881979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.160 [2024-07-15 14:00:40.881994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.160 [2024-07-15 14:00:40.882008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.160 [2024-07-15 14:00:40.882024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.160 [2024-07-15 14:00:40.882038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.160 [2024-07-15 14:00:40.882053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.160 [2024-07-15 14:00:40.882067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.160 [2024-07-15 14:00:40.882083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.160 [2024-07-15 14:00:40.882096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.161 [2024-07-15 14:00:40.882112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.161 [2024-07-15 14:00:40.882126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.161 [2024-07-15 14:00:40.882145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.161 [2024-07-15 14:00:40.882159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.161 [2024-07-15 14:00:40.882175] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.161 [2024-07-15 14:00:40.882189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.161 [2024-07-15 14:00:40.882205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.161 [2024-07-15 14:00:40.882219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.161 [2024-07-15 14:00:40.882235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.161 [2024-07-15 14:00:40.882248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.161 [2024-07-15 14:00:40.882264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.161 [2024-07-15 14:00:40.882277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.161 [2024-07-15 14:00:40.882293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.161 [2024-07-15 14:00:40.882307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.161 [2024-07-15 14:00:40.882322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.161 [2024-07-15 14:00:40.882336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.161 [2024-07-15 14:00:40.882351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.161 [2024-07-15 14:00:40.882364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.161 [2024-07-15 14:00:40.882381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.161 [2024-07-15 14:00:40.882394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.161 [2024-07-15 14:00:40.882410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.161 [2024-07-15 14:00:40.882423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.161 [2024-07-15 14:00:40.882439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.161 [2024-07-15 14:00:40.882452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.161 [2024-07-15 14:00:40.882468] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.161 [2024-07-15 14:00:40.882482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.161 [2024-07-15 14:00:40.882498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.161 [2024-07-15 14:00:40.882515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.161 [2024-07-15 14:00:40.882532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.161 [2024-07-15 14:00:40.882546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.161 [2024-07-15 14:00:40.882561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.161 [2024-07-15 14:00:40.882575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.161 [2024-07-15 14:00:40.882591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.161 [2024-07-15 14:00:40.882605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.161 [2024-07-15 14:00:40.882620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.161 [2024-07-15 14:00:40.882634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.161 [2024-07-15 14:00:40.882650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.161 [2024-07-15 14:00:40.882664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.161 [2024-07-15 14:00:40.882679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.161 [2024-07-15 14:00:40.882693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.161 [2024-07-15 14:00:40.882709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.161 [2024-07-15 14:00:40.882722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.161 [2024-07-15 14:00:40.882744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.161 [2024-07-15 14:00:40.882760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.161 [2024-07-15 14:00:40.882776] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.161 [2024-07-15 14:00:40.882790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.161 [2024-07-15 14:00:40.882806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.161 [2024-07-15 14:00:40.882819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.161 [2024-07-15 14:00:40.882834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.161 [2024-07-15 14:00:40.882848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.161 [2024-07-15 14:00:40.882864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.161 [2024-07-15 14:00:40.882877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.161 [2024-07-15 14:00:40.882893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.161 [2024-07-15 14:00:40.882910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.161 [2024-07-15 14:00:40.882927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.161 [2024-07-15 14:00:40.882941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.161 [2024-07-15 14:00:40.882956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.161 [2024-07-15 14:00:40.882970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.161 [2024-07-15 14:00:40.882985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.161 [2024-07-15 14:00:40.882999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.161 [2024-07-15 14:00:40.883015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.161 [2024-07-15 14:00:40.883029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.161 [2024-07-15 14:00:40.883044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.161 [2024-07-15 14:00:40.883058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.161 [2024-07-15 14:00:40.883073] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.161 [2024-07-15 14:00:40.883087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.161 [2024-07-15 14:00:40.883102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.161 [2024-07-15 14:00:40.883116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.161 [2024-07-15 14:00:40.883131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.161 [2024-07-15 14:00:40.883145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.161 [2024-07-15 14:00:40.883159] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb39670 is same with the state(5) to be set 00:21:46.161 [2024-07-15 14:00:40.885621] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:21:46.161 [2024-07-15 14:00:40.885654] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:46.161 [2024-07-15 14:00:40.885673] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:21:46.161 [2024-07-15 14:00:40.885691] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:21:46.161 task offset: 24576 on job bdev=Nvme3n1 fails 00:21:46.161 00:21:46.161 Latency(us) 00:21:46.161 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:46.161 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:46.161 Job: Nvme1n1 ended in about 0.91 seconds with error 00:21:46.161 Verification LBA range: start 0x0 length 0x400 00:21:46.161 Nvme1n1 : 0.91 160.19 10.01 70.70 0.00 274064.28 18544.26 267192.70 00:21:46.161 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:46.161 Job: Nvme2n1 ended in about 0.90 seconds with error 00:21:46.161 Verification LBA range: start 0x0 length 0x400 00:21:46.161 Nvme2n1 : 0.90 212.98 13.31 70.99 0.00 218195.06 30680.56 245444.46 00:21:46.161 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:46.161 Job: Nvme3n1 ended in about 0.89 seconds with error 00:21:46.161 Verification LBA range: start 0x0 length 0x400 00:21:46.161 Nvme3n1 : 0.89 214.83 13.43 71.61 0.00 211667.91 7573.05 264085.81 00:21:46.161 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:46.161 Job: Nvme4n1 ended in about 0.89 seconds with error 00:21:46.161 Verification LBA range: start 0x0 length 0x400 00:21:46.161 Nvme4n1 : 0.89 214.56 13.41 71.52 0.00 207294.39 29321.29 250104.79 00:21:46.161 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:46.161 Job: Nvme5n1 ended in about 0.91 seconds with error 00:21:46.161 Verification LBA range: start 0x0 length 0x400 00:21:46.161 Nvme5n1 : 0.91 140.24 8.77 70.12 0.00 276304.28 18932.62 273406.48 00:21:46.161 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:46.161 Job: Nvme6n1 ended in about 0.92 seconds with error 00:21:46.161 
Verification LBA range: start 0x0 length 0x400 00:21:46.161 Nvme6n1 : 0.92 139.75 8.73 69.87 0.00 271371.12 19029.71 250104.79 00:21:46.161 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:46.161 Job: Nvme7n1 ended in about 0.92 seconds with error 00:21:46.161 Verification LBA range: start 0x0 length 0x400 00:21:46.161 Nvme7n1 : 0.92 139.27 8.70 69.63 0.00 266315.85 33010.73 240784.12 00:21:46.161 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:46.161 Job: Nvme8n1 ended in about 0.92 seconds with error 00:21:46.161 Verification LBA range: start 0x0 length 0x400 00:21:46.161 Nvme8n1 : 0.92 138.78 8.67 69.39 0.00 261394.33 18641.35 253211.69 00:21:46.161 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:46.161 Job: Nvme9n1 ended in about 0.93 seconds with error 00:21:46.161 Verification LBA range: start 0x0 length 0x400 00:21:46.161 Nvme9n1 : 0.93 137.43 8.59 68.71 0.00 258485.67 24660.95 270299.59 00:21:46.161 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:46.161 Job: Nvme10n1 ended in about 0.93 seconds with error 00:21:46.161 Verification LBA range: start 0x0 length 0x400 00:21:46.161 Nvme10n1 : 0.93 138.30 8.64 69.15 0.00 250810.22 17767.54 290494.39 00:21:46.161 =================================================================================================================== 00:21:46.161 Total : 1636.33 102.27 701.71 0.00 246430.51 7573.05 290494.39 00:21:46.161 [2024-07-15 14:00:40.912841] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:46.161 [2024-07-15 14:00:40.912925] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:21:46.161 [2024-07-15 14:00:40.913241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.161 [2024-07-15 14:00:40.913277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa44690 with addr=10.0.0.2, port=4420 00:21:46.161 [2024-07-15 14:00:40.913297] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa44690 is same with the state(5) to be set 00:21:46.161 [2024-07-15 14:00:40.913324] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cb880 (9): Bad file descriptor 00:21:46.161 [2024-07-15 14:00:40.913346] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9d8850 (9): Bad file descriptor 00:21:46.161 [2024-07-15 14:00:40.913365] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x496610 (9): Bad file descriptor 00:21:46.161 [2024-07-15 14:00:40.913383] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb53b00 (9): Bad file descriptor 00:21:46.161 [2024-07-15 14:00:40.913708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.161 [2024-07-15 14:00:40.913745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb60980 with addr=10.0.0.2, port=4420 00:21:46.161 [2024-07-15 14:00:40.913765] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb60980 is same with the state(5) to be set 00:21:46.161 [2024-07-15 14:00:40.913870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.161 [2024-07-15 14:00:40.913896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x582200 with 
addr=10.0.0.2, port=4420 00:21:46.161 [2024-07-15 14:00:40.913911] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x582200 is same with the state(5) to be set 00:21:46.161 [2024-07-15 14:00:40.914057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.161 [2024-07-15 14:00:40.914082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c0eb0 with addr=10.0.0.2, port=4420 00:21:46.161 [2024-07-15 14:00:40.914097] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c0eb0 is same with the state(5) to be set 00:21:46.161 [2024-07-15 14:00:40.914218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.161 [2024-07-15 14:00:40.914244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b8c90 with addr=10.0.0.2, port=4420 00:21:46.161 [2024-07-15 14:00:40.914260] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b8c90 is same with the state(5) to be set 00:21:46.161 [2024-07-15 14:00:40.914398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.161 [2024-07-15 14:00:40.914422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb53920 with addr=10.0.0.2, port=4420 00:21:46.161 [2024-07-15 14:00:40.914438] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53920 is same with the state(5) to be set 00:21:46.161 [2024-07-15 14:00:40.914456] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa44690 (9): Bad file descriptor 00:21:46.161 [2024-07-15 14:00:40.914473] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:21:46.161 [2024-07-15 14:00:40.914486] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:21:46.161 [2024-07-15 14:00:40.914502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:21:46.161 [2024-07-15 14:00:40.914522] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:21:46.161 [2024-07-15 14:00:40.914536] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:21:46.161 [2024-07-15 14:00:40.914548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:21:46.161 [2024-07-15 14:00:40.914566] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:21:46.161 [2024-07-15 14:00:40.914580] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:21:46.161 [2024-07-15 14:00:40.914593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:21:46.161 [2024-07-15 14:00:40.914610] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:21:46.161 [2024-07-15 14:00:40.914623] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:21:46.161 [2024-07-15 14:00:40.914636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 
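The repeated "connect() failed, errno = 111" entries above are ECONNREFUSED: nothing is accepting connections on 10.0.0.2:4420 while the disconnected controllers try to reconnect, so each reset attempt fails at the socket layer before an NVMe-oF connect can even begin. A minimal helper of our own (not part of shutdown.sh, and assuming plain netcat is available on the test node) that waits for the listener to come back before expecting reconnects to succeed might look like:

#!/usr/bin/env bash
# Hypothetical helper: poll until an NVMe-oF TCP listener answers on the
# address/port seen in this log. errno 111 (ECONNREFUSED) from
# posix_sock_create simply means nothing is accepting on that port yet.
wait_for_listener() {
    local addr=${1:-10.0.0.2} port=${2:-4420} tries=${3:-30}
    for ((i = 0; i < tries; i++)); do
        if nc -z -w 1 "$addr" "$port"; then   # -z: probe only, -w 1: 1s timeout
            echo "listener up on $addr:$port"
            return 0
        fi
        sleep 1
    done
    echo "no listener on $addr:$port after $tries attempts" >&2
    return 1
}
wait_for_listener "$@"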
00:21:46.161 [2024-07-15 14:00:40.914668] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:46.161 [2024-07-15 14:00:40.914689] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:46.161 [2024-07-15 14:00:40.914713] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:46.161 [2024-07-15 14:00:40.914734] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:46.161 [2024-07-15 14:00:40.914763] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:46.161 [2024-07-15 14:00:40.915144] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:46.161 [2024-07-15 14:00:40.915168] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:46.161 [2024-07-15 14:00:40.915181] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:46.161 [2024-07-15 14:00:40.915193] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:46.161 [2024-07-15 14:00:40.915209] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb60980 (9): Bad file descriptor 00:21:46.161 [2024-07-15 14:00:40.915227] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x582200 (9): Bad file descriptor 00:21:46.161 [2024-07-15 14:00:40.915245] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c0eb0 (9): Bad file descriptor 00:21:46.161 [2024-07-15 14:00:40.915262] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b8c90 (9): Bad file descriptor 00:21:46.161 [2024-07-15 14:00:40.915279] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb53920 (9): Bad file descriptor 00:21:46.161 [2024-07-15 14:00:40.915294] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:21:46.162 [2024-07-15 14:00:40.915306] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:21:46.162 [2024-07-15 14:00:40.915319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:21:46.162 [2024-07-15 14:00:40.915377] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:46.162 [2024-07-15 14:00:40.915397] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:46.162 [2024-07-15 14:00:40.915411] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:21:46.162 [2024-07-15 14:00:40.915424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:21:46.162 [2024-07-15 14:00:40.915440] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:46.162 [2024-07-15 14:00:40.915454] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:46.162 [2024-07-15 14:00:40.915467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:46.162 [2024-07-15 14:00:40.915482] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:21:46.162 [2024-07-15 14:00:40.915495] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:21:46.162 [2024-07-15 14:00:40.915508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:21:46.162 [2024-07-15 14:00:40.915524] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:21:46.162 [2024-07-15 14:00:40.915538] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:21:46.162 [2024-07-15 14:00:40.915551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:21:46.162 [2024-07-15 14:00:40.915566] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:21:46.162 [2024-07-15 14:00:40.915579] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:21:46.162 [2024-07-15 14:00:40.915598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:21:46.162 [2024-07-15 14:00:40.915644] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:46.162 [2024-07-15 14:00:40.915664] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:46.162 [2024-07-15 14:00:40.915676] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:46.162 [2024-07-15 14:00:40.915687] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:46.162 [2024-07-15 14:00:40.915698] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
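Every controller that was still attached ends up on the same path: "Ctrlr is in error state", then "controller reinitialization failed", then "Resetting controller failed". That is consistent with the shutdown scenario this test case exercises, since the target side goes away while bdevperf still has I/O outstanding. A rough post-mortem check of our own, run against a captured copy of this output (the file name is hypothetical), to confirm that all ten subsystems cnode1 through cnode10 reached that path:

# Hypothetical sanity check, not part of the test suite: list which subsystems
# logged a failed controller reinitialization in the captured console output.
log=bdevperf-shutdown.log    # assumed capture of the output above
grep -o 'nqn.2016-06.io.spdk:cnode[0-9]*] controller reinitialization failed' "$log" | sort -u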
00:21:46.729 14:00:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:21:46.729 14:00:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:21:47.687 14:00:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 3800716 00:21:47.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (3800716) - No such process 00:21:47.687 14:00:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:21:47.687 14:00:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:21:47.687 14:00:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:47.687 14:00:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:47.687 14:00:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:47.687 14:00:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:47.687 14:00:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:47.687 14:00:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:21:47.687 14:00:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:47.687 14:00:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:21:47.687 14:00:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:47.687 14:00:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:47.687 rmmod nvme_tcp 00:21:47.687 rmmod nvme_fabrics 00:21:47.687 rmmod nvme_keyring 00:21:47.687 14:00:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:47.687 14:00:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:21:47.687 14:00:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:21:47.687 14:00:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:21:47.687 14:00:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:47.687 14:00:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:47.687 14:00:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:47.687 14:00:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:47.687 14:00:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:47.687 14:00:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.687 14:00:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:47.687 14:00:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:50.230 14:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:50.230 00:21:50.230 real 0m7.796s 00:21:50.230 user 0m19.760s 00:21:50.230 sys 0m1.489s 00:21:50.230 
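The trace above is the normal teardown path (stoptarget followed by nvmftestfini): the verify-state and config files are removed, the kernel NVMe/TCP modules are unloaded, and the test interface address is flushed. Condensed into the underlying commands, with testdir standing in for the long workspace path shown in the log:

# Sketch of the teardown sequence visible in the trace; values come from this run.
testdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
rm -f ./local-job0-0-verify.state
rm -rf "$testdir/bdevperf.conf" "$testdir/rpcs.txt"
sync
modprobe -v -r nvme-tcp       # -v shows the rmmod of nvme_tcp/nvme_fabrics/nvme_keyring
modprobe -v -r nvme-fabrics
ip -4 addr flush cvl_0_1      # interface name specific to this test bed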
14:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:50.230 14:00:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:50.230 ************************************ 00:21:50.230 END TEST nvmf_shutdown_tc3 00:21:50.230 ************************************ 00:21:50.230 14:00:44 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:21:50.230 14:00:44 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:21:50.230 00:21:50.230 real 0m27.646s 00:21:50.230 user 1m17.797s 00:21:50.230 sys 0m6.327s 00:21:50.230 14:00:44 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:50.230 14:00:44 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:50.230 ************************************ 00:21:50.230 END TEST nvmf_shutdown 00:21:50.230 ************************************ 00:21:50.230 14:00:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:50.230 14:00:44 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:21:50.230 14:00:44 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:50.230 14:00:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:50.230 14:00:44 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:21:50.230 14:00:44 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:50.230 14:00:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:50.230 14:00:44 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:21:50.230 14:00:44 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:50.230 14:00:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:50.230 14:00:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:50.230 14:00:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:50.230 ************************************ 00:21:50.230 START TEST nvmf_multicontroller 00:21:50.230 ************************************ 00:21:50.230 14:00:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:50.230 * Looking for test storage... 
00:21:50.230 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:50.230 14:00:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:50.230 14:00:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:21:50.230 14:00:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:50.230 14:00:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:50.230 14:00:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:50.230 14:00:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:50.230 14:00:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:50.230 14:00:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:50.230 14:00:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:50.230 14:00:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:50.230 14:00:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:50.230 14:00:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:50.230 14:00:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:50.230 14:00:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:21:50.230 14:00:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:50.230 14:00:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:50.230 14:00:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:50.230 14:00:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:50.230 14:00:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:50.230 14:00:44 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:50.230 14:00:44 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:50.230 14:00:44 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:50.230 14:00:44 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.230 14:00:44 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.230 14:00:44 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.230 14:00:44 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:21:50.230 14:00:44 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.230 14:00:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:21:50.230 14:00:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:50.230 14:00:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:50.231 14:00:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:50.231 14:00:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:50.231 14:00:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:50.231 14:00:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:50.231 14:00:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:50.231 14:00:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:50.231 14:00:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:50.231 14:00:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:50.231 14:00:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:50.231 14:00:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:50.231 14:00:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:50.231 14:00:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:21:50.231 14:00:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:50.231 14:00:44 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:50.231 14:00:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:50.231 14:00:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:50.231 14:00:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:50.231 14:00:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:50.231 14:00:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:50.231 14:00:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:50.231 14:00:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:50.231 14:00:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:50.231 14:00:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:50.231 14:00:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:21:50.231 14:00:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:52.141 14:00:46 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:21:52.141 Found 0000:84:00.0 (0x8086 - 0x159b) 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:21:52.141 Found 0000:84:00.1 (0x8086 - 0x159b) 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:21:52.141 Found net devices under 0000:84:00.0: cvl_0_0 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:21:52.141 Found net devices under 0000:84:00.1: cvl_0_1 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:52.141 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:52.142 14:00:46 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:52.142 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:52.142 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:52.142 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:52.142 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:52.142 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:52.142 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:52.142 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:21:52.142 00:21:52.142 --- 10.0.0.2 ping statistics --- 00:21:52.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.142 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:21:52.142 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:52.142 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:52.142 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:21:52.142 00:21:52.142 --- 10.0.0.1 ping statistics --- 00:21:52.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.142 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:21:52.142 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:52.142 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:21:52.142 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:52.142 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:52.142 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:52.142 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:52.142 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:52.142 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:52.142 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:52.142 14:00:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:21:52.142 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:52.142 14:00:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:52.142 14:00:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:52.142 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=3803254 00:21:52.142 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:52.142 14:00:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 3803254 00:21:52.142 14:00:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 3803254 ']' 00:21:52.142 14:00:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:52.142 14:00:46 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:21:52.142 14:00:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:52.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:52.142 14:00:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:52.142 14:00:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:52.142 [2024-07-15 14:00:46.866934] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:21:52.142 [2024-07-15 14:00:46.867021] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:52.142 EAL: No free 2048 kB hugepages reported on node 1 00:21:52.142 [2024-07-15 14:00:46.929402] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:52.399 [2024-07-15 14:00:47.039780] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:52.399 [2024-07-15 14:00:47.039831] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:52.399 [2024-07-15 14:00:47.039846] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:52.399 [2024-07-15 14:00:47.039858] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:52.399 [2024-07-15 14:00:47.039869] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
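The nvmftestinit phase logged above assembles the whole TCP test bed from ordinary ip/iptables commands: one E810 port (cvl_0_0) is moved into a dedicated network namespace, both ports get addresses on 10.0.0.0/24, TCP port 4420 is opened on the initiator-side interface, and reachability is checked with ping before the target is started inside the namespace. A minimal sketch of that sequence, run as root and assuming the same interface names and addresses as in this run, would be:

  # namespace for the target side, using the names from this log
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target NIC lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator address on the host
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP to the target port
  ping -c 1 10.0.0.2                               # host -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 # namespace -> host

Every command in the sketch appears verbatim in the entries above; only the grouping into one block is added here for readability.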
00:21:52.399 [2024-07-15 14:00:47.040019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:52.399 [2024-07-15 14:00:47.040080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:52.399 [2024-07-15 14:00:47.040083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:52.399 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:52.399 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:21:52.399 14:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:52.399 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:52.399 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:52.400 14:00:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:52.400 14:00:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:52.400 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.400 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:52.400 [2024-07-15 14:00:47.185820] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:52.400 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.400 14:00:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:52.400 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.400 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:52.400 Malloc0 00:21:52.400 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.400 14:00:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:52.400 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.400 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:52.400 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.400 14:00:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:52.400 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.400 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:52.657 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.657 14:00:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:52.657 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.657 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:52.657 [2024-07-15 14:00:47.247150] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:52.657 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.657 
14:00:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:52.657 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.657 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:52.657 [2024-07-15 14:00:47.255055] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:52.657 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.657 14:00:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:52.657 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.657 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:52.657 Malloc1 00:21:52.657 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.657 14:00:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:52.657 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.657 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:52.658 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.658 14:00:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:21:52.658 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.658 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:52.658 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.658 14:00:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:52.658 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.658 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:52.658 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.658 14:00:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:21:52.658 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.658 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:52.658 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.658 14:00:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3803397 00:21:52.658 14:00:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:21:52.658 14:00:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:52.658 14:00:47 nvmf_tcp.nvmf_multicontroller 
-- host/multicontroller.sh@47 -- # waitforlisten 3803397 /var/tmp/bdevperf.sock 00:21:52.658 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 3803397 ']' 00:21:52.658 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:52.658 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:52.658 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:52.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:52.658 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:52.658 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:52.915 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:52.915 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:21:52.915 14:00:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:21:52.915 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.915 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.173 NVMe0n1 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.174 1 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.174 request: 00:21:53.174 { 00:21:53.174 "name": "NVMe0", 00:21:53.174 "trtype": "tcp", 00:21:53.174 "traddr": "10.0.0.2", 00:21:53.174 "adrfam": "ipv4", 00:21:53.174 "trsvcid": "4420", 00:21:53.174 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:53.174 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:21:53.174 "hostaddr": "10.0.0.2", 00:21:53.174 "hostsvcid": "60000", 00:21:53.174 "prchk_reftag": false, 00:21:53.174 "prchk_guard": false, 00:21:53.174 "hdgst": false, 00:21:53.174 "ddgst": false, 00:21:53.174 "method": "bdev_nvme_attach_controller", 00:21:53.174 "req_id": 1 00:21:53.174 } 00:21:53.174 Got JSON-RPC error response 00:21:53.174 response: 00:21:53.174 { 00:21:53.174 "code": -114, 00:21:53.174 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:53.174 } 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.174 request: 00:21:53.174 { 00:21:53.174 "name": "NVMe0", 00:21:53.174 "trtype": "tcp", 00:21:53.174 "traddr": "10.0.0.2", 00:21:53.174 "adrfam": "ipv4", 00:21:53.174 "trsvcid": "4420", 00:21:53.174 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:53.174 "hostaddr": "10.0.0.2", 00:21:53.174 "hostsvcid": "60000", 00:21:53.174 "prchk_reftag": false, 00:21:53.174 "prchk_guard": false, 00:21:53.174 
"hdgst": false, 00:21:53.174 "ddgst": false, 00:21:53.174 "method": "bdev_nvme_attach_controller", 00:21:53.174 "req_id": 1 00:21:53.174 } 00:21:53.174 Got JSON-RPC error response 00:21:53.174 response: 00:21:53.174 { 00:21:53.174 "code": -114, 00:21:53.174 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:53.174 } 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.174 request: 00:21:53.174 { 00:21:53.174 "name": "NVMe0", 00:21:53.174 "trtype": "tcp", 00:21:53.174 "traddr": "10.0.0.2", 00:21:53.174 "adrfam": "ipv4", 00:21:53.174 "trsvcid": "4420", 00:21:53.174 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:53.174 "hostaddr": "10.0.0.2", 00:21:53.174 "hostsvcid": "60000", 00:21:53.174 "prchk_reftag": false, 00:21:53.174 "prchk_guard": false, 00:21:53.174 "hdgst": false, 00:21:53.174 "ddgst": false, 00:21:53.174 "multipath": "disable", 00:21:53.174 "method": "bdev_nvme_attach_controller", 00:21:53.174 "req_id": 1 00:21:53.174 } 00:21:53.174 Got JSON-RPC error response 00:21:53.174 response: 00:21:53.174 { 00:21:53.174 "code": -114, 00:21:53.174 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:21:53.174 } 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:53.174 14:00:47 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.174 request: 00:21:53.174 { 00:21:53.174 "name": "NVMe0", 00:21:53.174 "trtype": "tcp", 00:21:53.174 "traddr": "10.0.0.2", 00:21:53.174 "adrfam": "ipv4", 00:21:53.174 "trsvcid": "4420", 00:21:53.174 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:53.174 "hostaddr": "10.0.0.2", 00:21:53.174 "hostsvcid": "60000", 00:21:53.174 "prchk_reftag": false, 00:21:53.174 "prchk_guard": false, 00:21:53.174 "hdgst": false, 00:21:53.174 "ddgst": false, 00:21:53.174 "multipath": "failover", 00:21:53.174 "method": "bdev_nvme_attach_controller", 00:21:53.174 "req_id": 1 00:21:53.174 } 00:21:53.174 Got JSON-RPC error response 00:21:53.174 response: 00:21:53.174 { 00:21:53.174 "code": -114, 00:21:53.174 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:53.174 } 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.174 14:00:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.431 00:21:53.431 14:00:48 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.431 14:00:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:53.431 14:00:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.431 14:00:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.431 14:00:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.431 14:00:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:21:53.431 14:00:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.431 14:00:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.431 00:21:53.431 14:00:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.431 14:00:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:53.431 14:00:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:21:53.431 14:00:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.431 14:00:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.431 14:00:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.431 14:00:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:21:53.431 14:00:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:54.802 0 00:21:54.802 14:00:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:21:54.802 14:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.802 14:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:54.802 14:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.802 14:00:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 3803397 00:21:54.802 14:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 3803397 ']' 00:21:54.802 14:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 3803397 00:21:54.802 14:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:21:54.802 14:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:54.802 14:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3803397 00:21:54.802 14:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:54.802 14:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:54.802 14:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3803397' 00:21:54.802 killing process with pid 3803397 00:21:54.802 14:00:49 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 3803397 00:21:54.802 14:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 3803397 00:21:54.802 14:00:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:54.802 14:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.802 14:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:54.802 14:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.802 14:00:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:54.802 14:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.802 14:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:54.802 14:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.802 14:00:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:21:54.802 14:00:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:54.802 14:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:21:54.802 14:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:21:54.802 14:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:21:54.802 14:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:21:54.802 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:54.802 [2024-07-15 14:00:47.359103] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:21:54.802 [2024-07-15 14:00:47.359188] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3803397 ] 00:21:54.802 EAL: No free 2048 kB hugepages reported on node 1 00:21:54.802 [2024-07-15 14:00:47.419127] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:54.802 [2024-07-15 14:00:47.528786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:54.802 [2024-07-15 14:00:48.167255] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 4cd3c4d8-fb07-4da0-a163-89fc0a3bb2eb already exists 00:21:54.802 [2024-07-15 14:00:48.167296] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:4cd3c4d8-fb07-4da0-a163-89fc0a3bb2eb alias for bdev NVMe1n1 00:21:54.802 [2024-07-15 14:00:48.167326] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:21:54.802 Running I/O for 1 seconds... 
00:21:54.802 00:21:54.802 Latency(us) 00:21:54.802 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:54.802 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:21:54.802 NVMe0n1 : 1.00 19181.74 74.93 0.00 0.00 6654.02 3786.52 12913.02 00:21:54.802 =================================================================================================================== 00:21:54.802 Total : 19181.74 74.93 0.00 0.00 6654.02 3786.52 12913.02 00:21:54.802 Received shutdown signal, test time was about 1.000000 seconds 00:21:54.802 00:21:54.802 Latency(us) 00:21:54.802 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:54.802 =================================================================================================================== 00:21:54.802 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:54.802 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:54.802 14:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:54.802 14:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:21:54.802 14:00:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:21:54.802 14:00:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:54.803 14:00:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:21:55.059 14:00:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:55.059 14:00:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:21:55.059 14:00:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:55.059 14:00:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:55.059 rmmod nvme_tcp 00:21:55.059 rmmod nvme_fabrics 00:21:55.059 rmmod nvme_keyring 00:21:55.059 14:00:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:55.059 14:00:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:21:55.059 14:00:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:21:55.059 14:00:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 3803254 ']' 00:21:55.059 14:00:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 3803254 00:21:55.059 14:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 3803254 ']' 00:21:55.059 14:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 3803254 00:21:55.059 14:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:21:55.059 14:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:55.059 14:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3803254 00:21:55.059 14:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:55.059 14:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:55.060 14:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3803254' 00:21:55.060 killing process with pid 3803254 00:21:55.060 14:00:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 3803254 00:21:55.060 14:00:49 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 3803254 00:21:55.317 14:00:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:55.317 14:00:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:55.317 14:00:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:55.317 14:00:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:55.317 14:00:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:55.317 14:00:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:55.317 14:00:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:55.317 14:00:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.846 14:00:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:57.846 00:21:57.846 real 0m7.495s 00:21:57.846 user 0m11.870s 00:21:57.846 sys 0m2.285s 00:21:57.846 14:00:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:57.846 14:00:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:57.846 ************************************ 00:21:57.846 END TEST nvmf_multicontroller 00:21:57.846 ************************************ 00:21:57.846 14:00:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:57.846 14:00:52 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:57.846 14:00:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:57.846 14:00:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:57.846 14:00:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:57.846 ************************************ 00:21:57.846 START TEST nvmf_aer 00:21:57.846 ************************************ 00:21:57.846 14:00:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:57.846 * Looking for test storage... 
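The multicontroller test that just finished is driven entirely over two JSON-RPC sockets: the target's /var/tmp/spdk.sock for subsystem setup and bdevperf's /var/tmp/bdevperf.sock for controller attach/detach. The test issues these methods through its rpc_cmd helper; the condensed sketch below uses SPDK's scripts/rpc.py wrapper only so it stands alone, with the NQNs, ports and host identifiers taken from this run:

  # target-side setup (default RPC socket /var/tmp/spdk.sock)
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

  # initiator side: bdevperf exposes its own RPC socket (-z waits for controllers)
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &

  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  # further attach attempts reusing the name NVMe0 on port 4420 (different hostnqn,
  # different subsystem, -x disable, -x failover) are the ones rejected above with
  # JSON-RPC error -114 "A controller named NVMe0 already exists ..."
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1        # second listener: accepted
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

All method names and arguments are copied from the rpc_cmd invocations in the log; the relative binary paths are shortened from the workspace paths shown above.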
00:21:57.847 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:57.847 14:00:52 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:57.847 14:00:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:21:57.847 14:00:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:57.847 14:00:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:57.847 14:00:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:57.847 14:00:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:57.847 14:00:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:57.847 14:00:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:57.847 14:00:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:57.847 14:00:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:57.847 14:00:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:57.847 14:00:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:57.847 14:00:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:57.847 14:00:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:21:57.847 14:00:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:57.847 14:00:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:57.847 14:00:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:57.847 14:00:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:57.847 14:00:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:57.847 14:00:52 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:57.847 14:00:52 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:57.847 14:00:52 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:57.847 14:00:52 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.847 14:00:52 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.847 14:00:52 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.847 14:00:52 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:21:57.847 14:00:52 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.847 14:00:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:21:57.847 14:00:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:57.847 14:00:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:57.847 14:00:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:57.847 14:00:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:57.847 14:00:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:57.847 14:00:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:57.847 14:00:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:57.847 14:00:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:57.847 14:00:52 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:21:57.847 14:00:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:57.847 14:00:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:57.847 14:00:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:57.847 14:00:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:57.847 14:00:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:57.847 14:00:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:57.847 14:00:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:57.847 14:00:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.847 14:00:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:57.847 14:00:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:57.847 14:00:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:21:57.847 14:00:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:59.746 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:59.746 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:21:59.746 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:59.746 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:21:59.746 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:59.746 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:59.746 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:59.746 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:21:59.746 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:59.746 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:21:59.746 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:21:59.746 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:21:59.746 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:21:59.746 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:21:59.746 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:21:59.746 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:59.746 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:59.746 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:59.746 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:59.746 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:59.746 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:59.746 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:59.746 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:59.746 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:59.746 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:59.746 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:59.746 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:59.746 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:59.746 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:59.746 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:59.746 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:59.746 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:59.746 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:59.746 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:21:59.746 Found 0000:84:00.0 (0x8086 - 0x159b) 00:21:59.746 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:59.746 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:59.746 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:59.746 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:59.746 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:59.746 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:59.746 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 
0x159b)' 00:21:59.747 Found 0000:84:00.1 (0x8086 - 0x159b) 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:21:59.747 Found net devices under 0000:84:00.0: cvl_0_0 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:21:59.747 Found net devices under 0000:84:00.1: cvl_0_1 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:59.747 
14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:59.747 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:59.747 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:21:59.747 00:21:59.747 --- 10.0.0.2 ping statistics --- 00:21:59.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.747 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:59.747 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:59.747 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:21:59.747 00:21:59.747 --- 10.0.0.1 ping statistics --- 00:21:59.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.747 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=3805623 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 3805623 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 3805623 ']' 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:59.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:59.747 14:00:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:59.747 [2024-07-15 14:00:54.459983] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:21:59.747 [2024-07-15 14:00:54.460061] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:59.747 EAL: No free 2048 kB hugepages reported on node 1 00:21:59.747 [2024-07-15 14:00:54.521091] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:00.005 [2024-07-15 14:00:54.625057] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:00.005 [2024-07-15 14:00:54.625111] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
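The nvmf_tcp_init steps above build the whole test topology on a single host: the first E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as the target side, the second port (cvl_0_1) stays in the root namespace as the initiator, and both directions are verified with a one-packet ping before the target application starts. Condensed into a sketch (interface names and addresses taken from the log; the real helper also covers the NET_TYPE=virt case):

# Condensed sketch of the TCP test topology set up above.
TARGET_IF=cvl_0_0; INITIATOR_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
ip -4 addr flush "$TARGET_IF"; ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"                  # isolate the target-side port
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                    # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                # target -> initiator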
00:22:00.005 [2024-07-15 14:00:54.625140] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:00.005 [2024-07-15 14:00:54.625152] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:00.005 [2024-07-15 14:00:54.625161] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:00.005 [2024-07-15 14:00:54.625247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:00.005 [2024-07-15 14:00:54.625313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:00.005 [2024-07-15 14:00:54.625424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:00.005 [2024-07-15 14:00:54.625422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:00.005 14:00:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:00.005 14:00:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:22:00.005 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:00.005 14:00:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:00.005 14:00:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:00.005 14:00:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:00.005 14:00:54 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:00.005 14:00:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.005 14:00:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:00.005 [2024-07-15 14:00:54.783695] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:00.005 14:00:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.005 14:00:54 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:22:00.005 14:00:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.005 14:00:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:00.005 Malloc0 00:22:00.005 14:00:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.005 14:00:54 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:22:00.005 14:00:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.005 14:00:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:00.005 14:00:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.005 14:00:54 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:00.005 14:00:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.005 14:00:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:00.005 14:00:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.005 14:00:54 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:00.005 14:00:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.005 14:00:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:00.005 [2024-07-15 14:00:54.837225] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:22:00.005 14:00:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.005 14:00:54 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:22:00.005 14:00:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.005 14:00:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:00.262 [ 00:22:00.262 { 00:22:00.262 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:00.262 "subtype": "Discovery", 00:22:00.262 "listen_addresses": [], 00:22:00.262 "allow_any_host": true, 00:22:00.262 "hosts": [] 00:22:00.262 }, 00:22:00.262 { 00:22:00.262 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:00.262 "subtype": "NVMe", 00:22:00.262 "listen_addresses": [ 00:22:00.262 { 00:22:00.262 "trtype": "TCP", 00:22:00.262 "adrfam": "IPv4", 00:22:00.262 "traddr": "10.0.0.2", 00:22:00.262 "trsvcid": "4420" 00:22:00.262 } 00:22:00.262 ], 00:22:00.262 "allow_any_host": true, 00:22:00.262 "hosts": [], 00:22:00.262 "serial_number": "SPDK00000000000001", 00:22:00.262 "model_number": "SPDK bdev Controller", 00:22:00.262 "max_namespaces": 2, 00:22:00.262 "min_cntlid": 1, 00:22:00.262 "max_cntlid": 65519, 00:22:00.262 "namespaces": [ 00:22:00.262 { 00:22:00.262 "nsid": 1, 00:22:00.262 "bdev_name": "Malloc0", 00:22:00.262 "name": "Malloc0", 00:22:00.262 "nguid": "667C67B813AC41A2A8440B9B257C1E45", 00:22:00.262 "uuid": "667c67b8-13ac-41a2-a844-0b9b257c1e45" 00:22:00.262 } 00:22:00.262 ] 00:22:00.262 } 00:22:00.262 ] 00:22:00.262 14:00:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.262 14:00:54 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:22:00.262 14:00:54 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:22:00.262 14:00:54 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=3805653 00:22:00.262 14:00:54 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:22:00.262 14:00:54 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:22:00.262 14:00:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:22:00.262 14:00:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:00.262 14:00:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:22:00.262 14:00:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:22:00.262 14:00:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:22:00.262 EAL: No free 2048 kB hugepages reported on node 1 00:22:00.262 14:00:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:00.262 14:00:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:22:00.262 14:00:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:22:00.262 14:00:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:22:00.262 14:00:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:00.262 14:00:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:22:00.262 14:00:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:22:00.262 14:00:55 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:22:00.262 14:00:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.262 14:00:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:00.262 Malloc1 00:22:00.262 14:00:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.262 14:00:55 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:22:00.262 14:00:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.262 14:00:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:00.519 14:00:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.519 14:00:55 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:22:00.519 14:00:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.519 14:00:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:00.519 [ 00:22:00.519 { 00:22:00.519 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:00.519 "subtype": "Discovery", 00:22:00.519 "listen_addresses": [], 00:22:00.519 "allow_any_host": true, 00:22:00.519 "hosts": [] 00:22:00.519 }, 00:22:00.519 { 00:22:00.519 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:00.519 "subtype": "NVMe", 00:22:00.519 "listen_addresses": [ 00:22:00.519 { 00:22:00.519 "trtype": "TCP", 00:22:00.519 "adrfam": "IPv4", 00:22:00.519 "traddr": "10.0.0.2", 00:22:00.519 "trsvcid": "4420" 00:22:00.519 } 00:22:00.519 ], 00:22:00.519 "allow_any_host": true, 00:22:00.519 "hosts": [], 00:22:00.519 "serial_number": "SPDK00000000000001", 00:22:00.519 "model_number": "SPDK bdev Controller", 00:22:00.519 "max_namespaces": 2, 00:22:00.519 "min_cntlid": 1, 00:22:00.519 "max_cntlid": 65519, 00:22:00.519 "namespaces": [ 00:22:00.519 { 00:22:00.519 "nsid": 1, 00:22:00.519 "bdev_name": "Malloc0", 00:22:00.519 "name": "Malloc0", 00:22:00.519 "nguid": "667C67B813AC41A2A8440B9B257C1E45", 00:22:00.519 "uuid": "667c67b8-13ac-41a2-a844-0b9b257c1e45" 00:22:00.519 }, 00:22:00.519 { 00:22:00.519 "nsid": 2, 00:22:00.519 "bdev_name": "Malloc1", 00:22:00.519 "name": "Malloc1", 00:22:00.519 "nguid": "003673B305B846DA8AECE3012224BE58", 00:22:00.519 "uuid": "003673b3-05b8-46da-8aec-e3012224be58" 00:22:00.519 } 00:22:00.519 ] 00:22:00.519 } 00:22:00.519 ] 00:22:00.519 14:00:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.519 14:00:55 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 3805653 00:22:00.519 Asynchronous Event Request test 00:22:00.519 Attaching to 10.0.0.2 00:22:00.519 Attached to 10.0.0.2 00:22:00.519 Registering asynchronous event callbacks... 00:22:00.519 Starting namespace attribute notice tests for all controllers... 00:22:00.519 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:22:00.519 aer_cb - Changed Namespace 00:22:00.519 Cleaning up... 
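Stripped to its RPC calls, the aer.sh flow that produced the output above is: export a malloc bdev through a TCP subsystem, start the aer tool against it, wait for the touch file that signals the tool is attached, then hot-add a second namespace so the target emits a namespace-attribute-changed AEN ("aer_cb - Changed Namespace"). A rough equivalent using scripts/rpc.py (NQN, address and tool path as in the log; a sketch, not the test script itself):

# Sketch of the nvmf_aer RPC sequence shown above.
rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 --name Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Start the AER reader, then trigger the event by attaching a second namespace.
rm -f /tmp/aer_touch_file
./test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -n 2 -t /tmp/aer_touch_file &
while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done    # waitforfile equivalent
$rpc bdev_malloc_create 64 4096 --name Malloc1
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2    # fires the AEN
wait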
00:22:00.519 14:00:55 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:00.519 14:00:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.519 14:00:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:00.519 14:00:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.519 14:00:55 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:22:00.519 14:00:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.519 14:00:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:00.519 14:00:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.519 14:00:55 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:00.519 14:00:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.519 14:00:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:00.519 14:00:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.519 14:00:55 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:22:00.519 14:00:55 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:22:00.519 14:00:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:00.519 14:00:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:22:00.519 14:00:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:00.519 14:00:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:22:00.519 14:00:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:00.519 14:00:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:00.519 rmmod nvme_tcp 00:22:00.519 rmmod nvme_fabrics 00:22:00.519 rmmod nvme_keyring 00:22:00.519 14:00:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:00.519 14:00:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:22:00.519 14:00:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:22:00.519 14:00:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 3805623 ']' 00:22:00.519 14:00:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 3805623 00:22:00.519 14:00:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 3805623 ']' 00:22:00.519 14:00:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 3805623 00:22:00.519 14:00:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:22:00.519 14:00:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:00.519 14:00:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3805623 00:22:00.519 14:00:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:00.519 14:00:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:00.519 14:00:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3805623' 00:22:00.519 killing process with pid 3805623 00:22:00.519 14:00:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 3805623 00:22:00.519 14:00:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 3805623 00:22:00.779 14:00:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:00.779 14:00:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:00.779 14:00:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- 
# nvmf_tcp_fini 00:22:00.779 14:00:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:00.779 14:00:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:00.779 14:00:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:00.779 14:00:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:00.779 14:00:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:03.312 14:00:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:03.312 00:22:03.312 real 0m5.426s 00:22:03.312 user 0m4.128s 00:22:03.312 sys 0m1.963s 00:22:03.312 14:00:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:03.312 14:00:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:03.312 ************************************ 00:22:03.312 END TEST nvmf_aer 00:22:03.312 ************************************ 00:22:03.312 14:00:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:03.312 14:00:57 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:03.312 14:00:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:03.312 14:00:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:03.312 14:00:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:03.312 ************************************ 00:22:03.312 START TEST nvmf_async_init 00:22:03.312 ************************************ 00:22:03.312 14:00:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:03.312 * Looking for test storage... 
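Between the two tests, nvmftestfini undoes everything the init path set up: the nvme-tcp and nvme-fabrics modules are unloaded, the nvmf_tgt process is killed, the test namespace is removed and the initiator-side address is flushed. Condensed from the trace above (a sketch; the real helpers add retries and tracing):

# Teardown sketch mirroring the nvmftestfini / remove_spdk_ns output above.
modprobe -v -r nvme-tcp            # also pulls out nvme_fabrics / nvme_keyring as seen in the log
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null    # stop the target started by nvmfappstart
ip netns delete cvl_0_0_ns_spdk                   # returns cvl_0_0 to the root namespace
ip -4 addr flush cvl_0_1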
00:22:03.312 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:03.312 14:00:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:03.312 14:00:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:22:03.312 14:00:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:03.312 14:00:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:03.312 14:00:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:03.312 14:00:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:03.312 14:00:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:03.312 14:00:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:03.312 14:00:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:03.312 14:00:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:03.312 14:00:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:03.312 14:00:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:03.312 14:00:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:03.312 14:00:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:22:03.312 14:00:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:03.312 14:00:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:03.312 14:00:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:03.312 14:00:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:03.312 14:00:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:03.312 14:00:57 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:03.312 14:00:57 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:03.312 14:00:57 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:03.312 14:00:57 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.312 14:00:57 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.312 14:00:57 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.312 14:00:57 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:22:03.312 14:00:57 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.312 14:00:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:22:03.312 14:00:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:03.312 14:00:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:03.312 14:00:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:03.312 14:00:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:03.312 14:00:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:03.312 14:00:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:03.312 14:00:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:03.312 14:00:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:03.312 14:00:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:22:03.312 14:00:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:22:03.312 14:00:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:22:03.312 14:00:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:22:03.312 14:00:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:22:03.312 14:00:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:22:03.312 14:00:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=5908856a3d894f748579ce2dcd082178 00:22:03.312 14:00:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:22:03.312 14:00:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:03.312 14:00:57 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:03.312 14:00:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:03.312 14:00:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:03.312 14:00:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:03.312 14:00:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:03.312 14:00:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:03.312 14:00:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:03.312 14:00:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:03.312 14:00:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:03.312 14:00:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:22:03.312 14:00:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:05.212 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:05.212 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:22:05.212 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:05.212 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:05.212 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:05.212 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:05.212 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:05.212 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:22:05.212 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:05.212 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:22:05.212 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:22:05.212 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:22:05.212 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:22:05.212 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:22:05.212 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:22:05.212 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:05.212 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:05.212 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:05.212 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:05.212 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:05.212 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:05.212 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:05.212 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:05.212 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:05.212 14:00:59 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:05.212 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:05.212 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:05.212 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:05.212 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:05.212 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:05.212 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:22:05.213 Found 0000:84:00.0 (0x8086 - 0x159b) 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:22:05.213 Found 0000:84:00.1 (0x8086 - 0x159b) 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:22:05.213 Found net devices under 0000:84:00.0: cvl_0_0 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:22:05.213 Found net devices under 0000:84:00.1: cvl_0_1 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:05.213 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:05.213 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:22:05.213 00:22:05.213 --- 10.0.0.2 ping statistics --- 00:22:05.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:05.213 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:05.213 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:05.213 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:22:05.213 00:22:05.213 --- 10.0.0.1 ping statistics --- 00:22:05.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:05.213 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=3807724 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 3807724 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 3807724 ']' 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:05.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:05.213 14:00:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:05.213 [2024-07-15 14:00:59.967699] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
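nvmfappstart above launches the target inside the test namespace and then blocks until its RPC socket answers. A stripped-down version of that start-and-wait pattern (binary path, namespace and core mask as in the log; the real waitforlisten helper performs additional pid and socket checks):

# Sketch: start nvmf_tgt in the namespace and wait for /var/tmp/spdk.sock to respond.
NS=cvl_0_0_ns_spdk
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
for i in $(seq 1 100); do
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done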
00:22:05.213 [2024-07-15 14:00:59.967799] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:05.213 EAL: No free 2048 kB hugepages reported on node 1 00:22:05.213 [2024-07-15 14:01:00.031460] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.471 [2024-07-15 14:01:00.145651] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:05.471 [2024-07-15 14:01:00.145714] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:05.471 [2024-07-15 14:01:00.145750] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:05.471 [2024-07-15 14:01:00.145762] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:05.471 [2024-07-15 14:01:00.145772] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:05.471 [2024-07-15 14:01:00.145803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:05.471 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:05.471 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:22:05.471 14:01:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:05.471 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:05.471 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:05.471 14:01:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:05.471 14:01:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:05.471 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.471 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:05.471 [2024-07-15 14:01:00.286828] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:05.471 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.471 14:01:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:05.471 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.471 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:05.471 null0 00:22:05.471 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.471 14:01:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:05.471 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.471 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:05.471 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.471 14:01:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:05.471 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.471 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:05.728 14:01:00 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.728 14:01:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 5908856a3d894f748579ce2dcd082178 00:22:05.728 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.728 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:05.728 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.728 14:01:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:05.728 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.728 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:05.728 [2024-07-15 14:01:00.327068] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:05.728 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.728 14:01:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:05.728 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.728 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:05.728 nvme0n1 00:22:05.728 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.728 14:01:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:05.728 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.728 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:05.728 [ 00:22:05.728 { 00:22:05.728 "name": "nvme0n1", 00:22:05.728 "aliases": [ 00:22:05.728 "5908856a-3d89-4f74-8579-ce2dcd082178" 00:22:05.728 ], 00:22:05.728 "product_name": "NVMe disk", 00:22:05.728 "block_size": 512, 00:22:05.728 "num_blocks": 2097152, 00:22:05.729 "uuid": "5908856a-3d89-4f74-8579-ce2dcd082178", 00:22:05.729 "assigned_rate_limits": { 00:22:05.729 "rw_ios_per_sec": 0, 00:22:05.987 "rw_mbytes_per_sec": 0, 00:22:05.987 "r_mbytes_per_sec": 0, 00:22:05.987 "w_mbytes_per_sec": 0 00:22:05.987 }, 00:22:05.987 "claimed": false, 00:22:05.987 "zoned": false, 00:22:05.987 "supported_io_types": { 00:22:05.987 "read": true, 00:22:05.987 "write": true, 00:22:05.987 "unmap": false, 00:22:05.987 "flush": true, 00:22:05.987 "reset": true, 00:22:05.987 "nvme_admin": true, 00:22:05.987 "nvme_io": true, 00:22:05.987 "nvme_io_md": false, 00:22:05.987 "write_zeroes": true, 00:22:05.987 "zcopy": false, 00:22:05.987 "get_zone_info": false, 00:22:05.987 "zone_management": false, 00:22:05.987 "zone_append": false, 00:22:05.987 "compare": true, 00:22:05.987 "compare_and_write": true, 00:22:05.987 "abort": true, 00:22:05.987 "seek_hole": false, 00:22:05.987 "seek_data": false, 00:22:05.987 "copy": true, 00:22:05.987 "nvme_iov_md": false 00:22:05.987 }, 00:22:05.987 "memory_domains": [ 00:22:05.987 { 00:22:05.987 "dma_device_id": "system", 00:22:05.987 "dma_device_type": 1 00:22:05.987 } 00:22:05.987 ], 00:22:05.987 "driver_specific": { 00:22:05.987 "nvme": [ 00:22:05.987 { 00:22:05.987 "trid": { 00:22:05.987 "trtype": "TCP", 00:22:05.987 "adrfam": "IPv4", 00:22:05.987 "traddr": "10.0.0.2", 
00:22:05.987 "trsvcid": "4420", 00:22:05.987 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:05.987 }, 00:22:05.987 "ctrlr_data": { 00:22:05.987 "cntlid": 1, 00:22:05.987 "vendor_id": "0x8086", 00:22:05.987 "model_number": "SPDK bdev Controller", 00:22:05.987 "serial_number": "00000000000000000000", 00:22:05.987 "firmware_revision": "24.09", 00:22:05.987 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:05.987 "oacs": { 00:22:05.987 "security": 0, 00:22:05.987 "format": 0, 00:22:05.987 "firmware": 0, 00:22:05.987 "ns_manage": 0 00:22:05.987 }, 00:22:05.987 "multi_ctrlr": true, 00:22:05.987 "ana_reporting": false 00:22:05.987 }, 00:22:05.987 "vs": { 00:22:05.987 "nvme_version": "1.3" 00:22:05.987 }, 00:22:05.987 "ns_data": { 00:22:05.987 "id": 1, 00:22:05.987 "can_share": true 00:22:05.987 } 00:22:05.987 } 00:22:05.987 ], 00:22:05.987 "mp_policy": "active_passive" 00:22:05.987 } 00:22:05.987 } 00:22:05.987 ] 00:22:05.987 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.987 14:01:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:22:05.987 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.987 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:05.987 [2024-07-15 14:01:00.579652] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:05.987 [2024-07-15 14:01:00.579735] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f25c0 (9): Bad file descriptor 00:22:05.987 [2024-07-15 14:01:00.721872] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:05.987 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.987 14:01:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:05.987 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.987 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:05.987 [ 00:22:05.987 { 00:22:05.987 "name": "nvme0n1", 00:22:05.987 "aliases": [ 00:22:05.987 "5908856a-3d89-4f74-8579-ce2dcd082178" 00:22:05.987 ], 00:22:05.987 "product_name": "NVMe disk", 00:22:05.987 "block_size": 512, 00:22:05.987 "num_blocks": 2097152, 00:22:05.987 "uuid": "5908856a-3d89-4f74-8579-ce2dcd082178", 00:22:05.987 "assigned_rate_limits": { 00:22:05.987 "rw_ios_per_sec": 0, 00:22:05.987 "rw_mbytes_per_sec": 0, 00:22:05.987 "r_mbytes_per_sec": 0, 00:22:05.987 "w_mbytes_per_sec": 0 00:22:05.987 }, 00:22:05.987 "claimed": false, 00:22:05.987 "zoned": false, 00:22:05.987 "supported_io_types": { 00:22:05.987 "read": true, 00:22:05.987 "write": true, 00:22:05.987 "unmap": false, 00:22:05.987 "flush": true, 00:22:05.987 "reset": true, 00:22:05.987 "nvme_admin": true, 00:22:05.987 "nvme_io": true, 00:22:05.987 "nvme_io_md": false, 00:22:05.987 "write_zeroes": true, 00:22:05.987 "zcopy": false, 00:22:05.987 "get_zone_info": false, 00:22:05.987 "zone_management": false, 00:22:05.987 "zone_append": false, 00:22:05.987 "compare": true, 00:22:05.987 "compare_and_write": true, 00:22:05.987 "abort": true, 00:22:05.987 "seek_hole": false, 00:22:05.987 "seek_data": false, 00:22:05.987 "copy": true, 00:22:05.987 "nvme_iov_md": false 00:22:05.987 }, 00:22:05.987 "memory_domains": [ 00:22:05.987 { 00:22:05.987 "dma_device_id": "system", 00:22:05.987 "dma_device_type": 
1 00:22:05.987 } 00:22:05.987 ], 00:22:05.987 "driver_specific": { 00:22:05.987 "nvme": [ 00:22:05.987 { 00:22:05.987 "trid": { 00:22:05.987 "trtype": "TCP", 00:22:05.987 "adrfam": "IPv4", 00:22:05.987 "traddr": "10.0.0.2", 00:22:05.987 "trsvcid": "4420", 00:22:05.987 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:05.987 }, 00:22:05.987 "ctrlr_data": { 00:22:05.987 "cntlid": 2, 00:22:05.987 "vendor_id": "0x8086", 00:22:05.987 "model_number": "SPDK bdev Controller", 00:22:05.987 "serial_number": "00000000000000000000", 00:22:05.987 "firmware_revision": "24.09", 00:22:05.987 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:05.987 "oacs": { 00:22:05.987 "security": 0, 00:22:05.987 "format": 0, 00:22:05.987 "firmware": 0, 00:22:05.987 "ns_manage": 0 00:22:05.987 }, 00:22:05.987 "multi_ctrlr": true, 00:22:05.987 "ana_reporting": false 00:22:05.987 }, 00:22:05.987 "vs": { 00:22:05.987 "nvme_version": "1.3" 00:22:05.987 }, 00:22:05.987 "ns_data": { 00:22:05.987 "id": 1, 00:22:05.987 "can_share": true 00:22:05.987 } 00:22:05.987 } 00:22:05.987 ], 00:22:05.987 "mp_policy": "active_passive" 00:22:05.987 } 00:22:05.987 } 00:22:05.987 ] 00:22:05.987 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.987 14:01:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:05.987 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.987 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:05.987 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.987 14:01:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:22:05.987 14:01:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.UPSG1h6Le7 00:22:05.987 14:01:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:05.987 14:01:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.UPSG1h6Le7 00:22:05.987 14:01:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:05.987 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.987 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:05.987 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.987 14:01:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:22:05.987 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.987 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:05.987 [2024-07-15 14:01:00.768272] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:05.987 [2024-07-15 14:01:00.768394] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:05.987 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.987 14:01:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UPSG1h6Le7 00:22:05.987 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 
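The async_init checks above hinge on the namespace GUID surviving the attach: the null bdev is exported with an explicit -g <nguid>, and after bdev_nvme_attach_controller (and again after bdev_nvme_reset_controller, where only cntlid changes from 1 to 2) the resulting nvme0n1 must report the same value as its uuid. A compact sketch of that round-trip (NQN, address and nguid as in the log; jq is assumed here for JSON extraction):

# Sketch of the nguid round-trip verified by async_init (values from the log above).
rpc=./scripts/rpc.py
nguid=5908856a3d894f748579ce2dcd082178
$rpc bdev_null_create null0 1024 512
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g "$nguid"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode0
# The attached bdev must carry the GUID assigned on the target side.
got=$($rpc bdev_get_bdevs -b nvme0n1 | jq -r '.[0].uuid' | tr -d -)
[ "$got" = "$nguid" ] && echo "nguid round-trip OK" || echo "nguid mismatch: $got"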
00:22:05.987 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:05.987 [2024-07-15 14:01:00.776290] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:05.987 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.987 14:01:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UPSG1h6Le7 00:22:05.987 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.987 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:05.987 [2024-07-15 14:01:00.784317] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:05.987 [2024-07-15 14:01:00.784378] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:06.246 nvme0n1 00:22:06.246 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.246 14:01:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:06.246 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.246 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:06.246 [ 00:22:06.246 { 00:22:06.246 "name": "nvme0n1", 00:22:06.246 "aliases": [ 00:22:06.246 "5908856a-3d89-4f74-8579-ce2dcd082178" 00:22:06.246 ], 00:22:06.246 "product_name": "NVMe disk", 00:22:06.246 "block_size": 512, 00:22:06.246 "num_blocks": 2097152, 00:22:06.246 "uuid": "5908856a-3d89-4f74-8579-ce2dcd082178", 00:22:06.246 "assigned_rate_limits": { 00:22:06.246 "rw_ios_per_sec": 0, 00:22:06.246 "rw_mbytes_per_sec": 0, 00:22:06.246 "r_mbytes_per_sec": 0, 00:22:06.246 "w_mbytes_per_sec": 0 00:22:06.246 }, 00:22:06.246 "claimed": false, 00:22:06.246 "zoned": false, 00:22:06.246 "supported_io_types": { 00:22:06.246 "read": true, 00:22:06.246 "write": true, 00:22:06.246 "unmap": false, 00:22:06.246 "flush": true, 00:22:06.246 "reset": true, 00:22:06.246 "nvme_admin": true, 00:22:06.246 "nvme_io": true, 00:22:06.246 "nvme_io_md": false, 00:22:06.246 "write_zeroes": true, 00:22:06.246 "zcopy": false, 00:22:06.246 "get_zone_info": false, 00:22:06.246 "zone_management": false, 00:22:06.246 "zone_append": false, 00:22:06.246 "compare": true, 00:22:06.246 "compare_and_write": true, 00:22:06.246 "abort": true, 00:22:06.246 "seek_hole": false, 00:22:06.246 "seek_data": false, 00:22:06.246 "copy": true, 00:22:06.246 "nvme_iov_md": false 00:22:06.246 }, 00:22:06.246 "memory_domains": [ 00:22:06.246 { 00:22:06.246 "dma_device_id": "system", 00:22:06.246 "dma_device_type": 1 00:22:06.246 } 00:22:06.246 ], 00:22:06.246 "driver_specific": { 00:22:06.246 "nvme": [ 00:22:06.246 { 00:22:06.246 "trid": { 00:22:06.246 "trtype": "TCP", 00:22:06.246 "adrfam": "IPv4", 00:22:06.246 "traddr": "10.0.0.2", 00:22:06.246 "trsvcid": "4421", 00:22:06.246 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:06.246 }, 00:22:06.246 "ctrlr_data": { 00:22:06.246 "cntlid": 3, 00:22:06.246 "vendor_id": "0x8086", 00:22:06.246 "model_number": "SPDK bdev Controller", 00:22:06.246 "serial_number": "00000000000000000000", 00:22:06.246 "firmware_revision": "24.09", 00:22:06.246 "subnqn": "nqn.2016-06.io.spdk:cnode0", 
00:22:06.246 "oacs": { 00:22:06.246 "security": 0, 00:22:06.246 "format": 0, 00:22:06.246 "firmware": 0, 00:22:06.246 "ns_manage": 0 00:22:06.246 }, 00:22:06.246 "multi_ctrlr": true, 00:22:06.246 "ana_reporting": false 00:22:06.246 }, 00:22:06.246 "vs": { 00:22:06.246 "nvme_version": "1.3" 00:22:06.246 }, 00:22:06.246 "ns_data": { 00:22:06.246 "id": 1, 00:22:06.246 "can_share": true 00:22:06.246 } 00:22:06.246 } 00:22:06.246 ], 00:22:06.246 "mp_policy": "active_passive" 00:22:06.246 } 00:22:06.246 } 00:22:06.246 ] 00:22:06.246 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.246 14:01:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:06.246 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.246 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:06.246 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.246 14:01:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.UPSG1h6Le7 00:22:06.246 14:01:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:22:06.246 14:01:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:22:06.246 14:01:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:06.246 14:01:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:22:06.246 14:01:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:06.246 14:01:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:22:06.246 14:01:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:06.246 14:01:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:06.246 rmmod nvme_tcp 00:22:06.246 rmmod nvme_fabrics 00:22:06.246 rmmod nvme_keyring 00:22:06.246 14:01:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:06.246 14:01:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:22:06.246 14:01:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:22:06.246 14:01:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 3807724 ']' 00:22:06.246 14:01:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 3807724 00:22:06.246 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 3807724 ']' 00:22:06.246 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 3807724 00:22:06.246 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:22:06.246 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:06.246 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3807724 00:22:06.246 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:06.246 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:06.246 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3807724' 00:22:06.246 killing process with pid 3807724 00:22:06.246 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 3807724 00:22:06.246 [2024-07-15 14:01:00.951414] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled 
for removal in v24.09 hit 1 times 00:22:06.246 [2024-07-15 14:01:00.951450] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:06.246 14:01:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 3807724 00:22:06.506 14:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:06.506 14:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:06.506 14:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:06.506 14:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:06.506 14:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:06.506 14:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:06.506 14:01:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:06.506 14:01:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:08.406 14:01:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:08.406 00:22:08.406 real 0m5.622s 00:22:08.406 user 0m2.099s 00:22:08.406 sys 0m1.901s 00:22:08.406 14:01:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:08.406 14:01:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:08.406 ************************************ 00:22:08.406 END TEST nvmf_async_init 00:22:08.406 ************************************ 00:22:08.665 14:01:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:08.665 14:01:03 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:08.665 14:01:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:08.665 14:01:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:08.665 14:01:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:08.665 ************************************ 00:22:08.665 START TEST dma 00:22:08.665 ************************************ 00:22:08.665 14:01:03 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:08.665 * Looking for test storage... 
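For reference, the secure-channel path that nvmf_async_init exercised above comes down to a short RPC sequence. A minimal sketch, assuming the test's rpc_cmd wrapper maps to SPDK's scripts/rpc.py as usual; the key value, NQNs, address and ports are the literals from the trace, while the key file name is an arbitrary stand-in for the test's mktemp output:

    # Interleaved retained PSK for the host (the trace itself warns this PSK-path
    # form of --psk is deprecated and scheduled for removal in v24.09).
    KEY=/tmp/psk.txt    # stand-in path, not the test's mktemp name
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$KEY"
    chmod 0600 "$KEY"
    scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$KEY"
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"
    scripts/rpc.py bdev_get_bdevs -b nvme0n1    # nvme0n1 appears once the TLS connect succeeds
    scripts/rpc.py bdev_nvme_detach_controller nvme0
    rm -f "$KEY"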
00:22:08.665 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:08.665 14:01:03 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:08.665 14:01:03 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:22:08.665 14:01:03 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:08.665 14:01:03 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:08.665 14:01:03 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:08.665 14:01:03 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:08.665 14:01:03 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:08.665 14:01:03 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:08.665 14:01:03 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:08.665 14:01:03 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:08.665 14:01:03 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:08.665 14:01:03 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:08.665 14:01:03 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:08.665 14:01:03 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:22:08.665 14:01:03 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:08.665 14:01:03 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:08.665 14:01:03 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:08.665 14:01:03 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:08.665 14:01:03 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:08.665 14:01:03 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:08.665 14:01:03 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:08.665 14:01:03 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:08.665 14:01:03 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.666 14:01:03 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.666 14:01:03 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.666 14:01:03 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:22:08.666 14:01:03 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.666 14:01:03 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:22:08.666 14:01:03 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:08.666 14:01:03 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:08.666 14:01:03 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:08.666 14:01:03 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:08.666 14:01:03 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:08.666 14:01:03 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:08.666 14:01:03 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:08.666 14:01:03 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:08.666 14:01:03 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:22:08.666 14:01:03 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:22:08.666 00:22:08.666 real 0m0.074s 00:22:08.666 user 0m0.035s 00:22:08.666 sys 0m0.045s 00:22:08.666 14:01:03 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:08.666 14:01:03 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:22:08.666 ************************************ 00:22:08.666 END TEST dma 00:22:08.666 ************************************ 00:22:08.666 14:01:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:08.666 14:01:03 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:08.666 14:01:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:08.666 14:01:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:08.666 14:01:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:08.666 ************************************ 00:22:08.666 START TEST nvmf_identify 00:22:08.666 ************************************ 00:22:08.666 14:01:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:08.666 * Looking for test storage... 
00:22:08.666 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:08.666 14:01:03 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:08.666 14:01:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:08.666 14:01:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:08.666 14:01:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:08.666 14:01:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:08.666 14:01:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:08.666 14:01:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:08.666 14:01:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:08.666 14:01:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:08.666 14:01:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:08.666 14:01:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:08.666 14:01:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:08.666 14:01:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:08.666 14:01:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:22:08.666 14:01:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:08.666 14:01:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:08.666 14:01:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:08.666 14:01:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:08.666 14:01:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:08.666 14:01:03 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:08.666 14:01:03 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:08.666 14:01:03 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:08.666 14:01:03 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.666 14:01:03 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.666 14:01:03 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.666 14:01:03 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:22:08.666 14:01:03 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.666 14:01:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:22:08.666 14:01:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:08.666 14:01:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:08.666 14:01:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:08.666 14:01:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:08.666 14:01:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:08.666 14:01:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:08.666 14:01:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:08.666 14:01:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:08.666 14:01:03 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:08.666 14:01:03 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:08.666 14:01:03 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:08.666 14:01:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:08.666 14:01:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:08.666 14:01:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:08.666 14:01:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:08.666 14:01:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:08.666 14:01:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:08.666 14:01:03 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:08.666 14:01:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:08.666 14:01:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:08.666 14:01:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:08.666 14:01:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:22:08.666 14:01:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:22:11.195 Found 0000:84:00.0 (0x8086 - 0x159b) 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:22:11.195 Found 0000:84:00.1 (0x8086 - 0x159b) 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:22:11.195 Found net devices under 0000:84:00.0: cvl_0_0 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:11.195 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:22:11.196 Found net devices under 0000:84:00.1: cvl_0_1 00:22:11.196 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:11.196 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:11.196 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:22:11.196 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:11.196 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:11.196 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:11.196 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:11.196 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:11.196 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:11.196 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:11.196 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:11.196 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:11.196 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:11.196 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:11.196 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:11.196 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:11.196 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:11.196 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:11.196 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:11.196 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:11.196 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:11.196 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:11.196 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:11.196 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:11.196 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:11.196 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:11.196 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:11.196 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:22:11.196 00:22:11.196 --- 10.0.0.2 ping statistics --- 00:22:11.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.196 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:22:11.196 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:11.196 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:11.196 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:22:11.196 00:22:11.196 --- 10.0.0.1 ping statistics --- 00:22:11.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.196 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:22:11.196 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:11.196 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:22:11.196 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:11.196 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:11.196 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:11.196 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:11.196 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:11.196 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:11.196 14:01:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:11.196 14:01:05 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:11.196 14:01:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:11.196 14:01:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:11.196 14:01:05 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3809867 00:22:11.196 14:01:05 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:11.196 14:01:05 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:11.196 14:01:05 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3809867 00:22:11.196 14:01:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 3809867 ']' 00:22:11.196 14:01:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:11.196 14:01:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:11.196 14:01:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:11.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:11.196 14:01:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:11.196 14:01:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:11.196 [2024-07-15 14:01:05.710177] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:22:11.196 [2024-07-15 14:01:05.710263] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:11.196 EAL: No free 2048 kB hugepages reported on node 1 00:22:11.196 [2024-07-15 14:01:05.776537] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:11.196 [2024-07-15 14:01:05.885853] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
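The loopback topology that the two pings above just validated is what the nvmf_tcp_init helper builds from the detected e810 ports: one port is moved into a private network namespace and owns the target address, the other stays in the root namespace as the initiator side. A condensed sketch of the commands from the trace, with the interface and namespace names as detected on this host:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # accept NVMe/TCP (4420) on the initiator-side interface
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1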
00:22:11.196 [2024-07-15 14:01:05.885906] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:11.196 [2024-07-15 14:01:05.885935] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:11.196 [2024-07-15 14:01:05.885947] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:11.196 [2024-07-15 14:01:05.885957] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:11.196 [2024-07-15 14:01:05.886026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:11.196 [2024-07-15 14:01:05.886111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:11.196 [2024-07-15 14:01:05.886178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:11.196 [2024-07-15 14:01:05.886180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:12.130 14:01:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:12.130 14:01:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:22:12.130 14:01:06 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:12.130 14:01:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.130 14:01:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:12.130 [2024-07-15 14:01:06.684819] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:12.130 14:01:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.130 14:01:06 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:12.130 14:01:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:12.130 14:01:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:12.130 14:01:06 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:12.130 14:01:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.130 14:01:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:12.130 Malloc0 00:22:12.130 14:01:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.130 14:01:06 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:12.130 14:01:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.130 14:01:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:12.130 14:01:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.130 14:01:06 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:12.130 14:01:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.130 14:01:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:12.130 14:01:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.130 14:01:06 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:12.130 14:01:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 
-- # xtrace_disable 00:22:12.130 14:01:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:12.130 [2024-07-15 14:01:06.758291] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:12.130 14:01:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.130 14:01:06 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:12.130 14:01:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.130 14:01:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:12.130 14:01:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.130 14:01:06 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:12.130 14:01:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.130 14:01:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:12.130 [ 00:22:12.130 { 00:22:12.130 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:12.130 "subtype": "Discovery", 00:22:12.130 "listen_addresses": [ 00:22:12.130 { 00:22:12.130 "trtype": "TCP", 00:22:12.130 "adrfam": "IPv4", 00:22:12.130 "traddr": "10.0.0.2", 00:22:12.130 "trsvcid": "4420" 00:22:12.130 } 00:22:12.130 ], 00:22:12.130 "allow_any_host": true, 00:22:12.130 "hosts": [] 00:22:12.130 }, 00:22:12.130 { 00:22:12.130 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:12.130 "subtype": "NVMe", 00:22:12.130 "listen_addresses": [ 00:22:12.130 { 00:22:12.130 "trtype": "TCP", 00:22:12.130 "adrfam": "IPv4", 00:22:12.130 "traddr": "10.0.0.2", 00:22:12.130 "trsvcid": "4420" 00:22:12.130 } 00:22:12.130 ], 00:22:12.130 "allow_any_host": true, 00:22:12.130 "hosts": [], 00:22:12.130 "serial_number": "SPDK00000000000001", 00:22:12.130 "model_number": "SPDK bdev Controller", 00:22:12.130 "max_namespaces": 32, 00:22:12.130 "min_cntlid": 1, 00:22:12.130 "max_cntlid": 65519, 00:22:12.130 "namespaces": [ 00:22:12.130 { 00:22:12.130 "nsid": 1, 00:22:12.130 "bdev_name": "Malloc0", 00:22:12.130 "name": "Malloc0", 00:22:12.130 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:12.130 "eui64": "ABCDEF0123456789", 00:22:12.130 "uuid": "90237c9d-64aa-4b6c-bffd-9d11d613496f" 00:22:12.130 } 00:22:12.130 ] 00:22:12.130 } 00:22:12.130 ] 00:22:12.130 14:01:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.130 14:01:06 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:12.130 [2024-07-15 14:01:06.799145] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
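Before the identify tool's startup banner continues below, the target-side preparation that identify.sh just performed is easier to read in one place. A condensed sketch, assuming rpc_cmd forwards to scripts/rpc.py against the nvmf_tgt running in the cvl_0_0_ns_spdk namespace; all arguments are the ones visible in the trace:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # The debug output that follows is this invocation against the discovery subsystem:
    build/bin/spdk_nvme_identify -L all \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'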
00:22:12.130 [2024-07-15 14:01:06.799191] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3810018 ] 00:22:12.130 EAL: No free 2048 kB hugepages reported on node 1 00:22:12.130 [2024-07-15 14:01:06.833191] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:22:12.130 [2024-07-15 14:01:06.833254] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:12.130 [2024-07-15 14:01:06.833264] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:12.130 [2024-07-15 14:01:06.833280] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:12.130 [2024-07-15 14:01:06.833290] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:12.130 [2024-07-15 14:01:06.833692] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:22:12.130 [2024-07-15 14:01:06.833764] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x10b9540 0 00:22:12.130 [2024-07-15 14:01:06.839768] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:12.130 [2024-07-15 14:01:06.839787] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:12.130 [2024-07-15 14:01:06.839795] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:12.130 [2024-07-15 14:01:06.839801] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:12.130 [2024-07-15 14:01:06.839854] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.130 [2024-07-15 14:01:06.839867] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.130 [2024-07-15 14:01:06.839875] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10b9540) 00:22:12.130 [2024-07-15 14:01:06.839891] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:12.130 [2024-07-15 14:01:06.839926] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11193c0, cid 0, qid 0 00:22:12.130 [2024-07-15 14:01:06.847755] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.130 [2024-07-15 14:01:06.847773] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.130 [2024-07-15 14:01:06.847780] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.130 [2024-07-15 14:01:06.847787] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11193c0) on tqpair=0x10b9540 00:22:12.130 [2024-07-15 14:01:06.847807] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:12.130 [2024-07-15 14:01:06.847818] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:22:12.130 [2024-07-15 14:01:06.847831] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:22:12.130 [2024-07-15 14:01:06.847852] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.130 [2024-07-15 14:01:06.847861] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.130 [2024-07-15 14:01:06.847867] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10b9540) 00:22:12.130 [2024-07-15 14:01:06.847878] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.130 [2024-07-15 14:01:06.847901] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11193c0, cid 0, qid 0 00:22:12.130 [2024-07-15 14:01:06.848123] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.130 [2024-07-15 14:01:06.848138] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.130 [2024-07-15 14:01:06.848145] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.130 [2024-07-15 14:01:06.848151] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11193c0) on tqpair=0x10b9540 00:22:12.131 [2024-07-15 14:01:06.848159] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:22:12.131 [2024-07-15 14:01:06.848172] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:22:12.131 [2024-07-15 14:01:06.848184] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.131 [2024-07-15 14:01:06.848191] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.131 [2024-07-15 14:01:06.848197] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10b9540) 00:22:12.131 [2024-07-15 14:01:06.848207] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.131 [2024-07-15 14:01:06.848227] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11193c0, cid 0, qid 0 00:22:12.131 [2024-07-15 14:01:06.848418] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.131 [2024-07-15 14:01:06.848432] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.131 [2024-07-15 14:01:06.848439] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.131 [2024-07-15 14:01:06.848445] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11193c0) on tqpair=0x10b9540 00:22:12.131 [2024-07-15 14:01:06.848453] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:22:12.131 [2024-07-15 14:01:06.848467] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:22:12.131 [2024-07-15 14:01:06.848478] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.131 [2024-07-15 14:01:06.848485] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.131 [2024-07-15 14:01:06.848491] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10b9540) 00:22:12.131 [2024-07-15 14:01:06.848501] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.131 [2024-07-15 14:01:06.848521] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11193c0, cid 0, qid 0 00:22:12.131 [2024-07-15 14:01:06.848668] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.131 
[2024-07-15 14:01:06.848682] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.131 [2024-07-15 14:01:06.848689] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.131 [2024-07-15 14:01:06.848695] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11193c0) on tqpair=0x10b9540 00:22:12.131 [2024-07-15 14:01:06.848703] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:12.131 [2024-07-15 14:01:06.848735] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.131 [2024-07-15 14:01:06.848757] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.131 [2024-07-15 14:01:06.848764] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10b9540) 00:22:12.131 [2024-07-15 14:01:06.848775] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.131 [2024-07-15 14:01:06.848812] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11193c0, cid 0, qid 0 00:22:12.131 [2024-07-15 14:01:06.848919] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.131 [2024-07-15 14:01:06.848934] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.131 [2024-07-15 14:01:06.848941] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.131 [2024-07-15 14:01:06.848947] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11193c0) on tqpair=0x10b9540 00:22:12.131 [2024-07-15 14:01:06.848955] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:22:12.131 [2024-07-15 14:01:06.848964] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:22:12.131 [2024-07-15 14:01:06.848978] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:12.131 [2024-07-15 14:01:06.849088] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:22:12.131 [2024-07-15 14:01:06.849097] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:12.131 [2024-07-15 14:01:06.849125] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.131 [2024-07-15 14:01:06.849132] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.131 [2024-07-15 14:01:06.849138] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10b9540) 00:22:12.131 [2024-07-15 14:01:06.849148] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.131 [2024-07-15 14:01:06.849168] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11193c0, cid 0, qid 0 00:22:12.131 [2024-07-15 14:01:06.849359] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.131 [2024-07-15 14:01:06.849372] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.131 [2024-07-15 14:01:06.849379] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:22:12.131 [2024-07-15 14:01:06.849385] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11193c0) on tqpair=0x10b9540 00:22:12.131 [2024-07-15 14:01:06.849393] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:12.131 [2024-07-15 14:01:06.849409] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.131 [2024-07-15 14:01:06.849417] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.131 [2024-07-15 14:01:06.849423] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10b9540) 00:22:12.131 [2024-07-15 14:01:06.849433] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.131 [2024-07-15 14:01:06.849452] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11193c0, cid 0, qid 0 00:22:12.131 [2024-07-15 14:01:06.849561] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.131 [2024-07-15 14:01:06.849575] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.131 [2024-07-15 14:01:06.849581] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.131 [2024-07-15 14:01:06.849587] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11193c0) on tqpair=0x10b9540 00:22:12.131 [2024-07-15 14:01:06.849594] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:12.131 [2024-07-15 14:01:06.849606] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:22:12.131 [2024-07-15 14:01:06.849620] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:22:12.131 [2024-07-15 14:01:06.849638] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:22:12.131 [2024-07-15 14:01:06.849653] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.131 [2024-07-15 14:01:06.849660] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10b9540) 00:22:12.131 [2024-07-15 14:01:06.849670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.131 [2024-07-15 14:01:06.849690] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11193c0, cid 0, qid 0 00:22:12.131 [2024-07-15 14:01:06.849891] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:12.131 [2024-07-15 14:01:06.849907] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:12.131 [2024-07-15 14:01:06.849914] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:12.131 [2024-07-15 14:01:06.849920] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10b9540): datao=0, datal=4096, cccid=0 00:22:12.131 [2024-07-15 14:01:06.849928] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11193c0) on tqpair(0x10b9540): expected_datao=0, payload_size=4096 00:22:12.131 [2024-07-15 14:01:06.849935] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:22:12.131 [2024-07-15 14:01:06.849945] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:12.131 [2024-07-15 14:01:06.849953] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:12.131 [2024-07-15 14:01:06.850040] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.131 [2024-07-15 14:01:06.850051] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.131 [2024-07-15 14:01:06.850058] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.131 [2024-07-15 14:01:06.850064] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11193c0) on tqpair=0x10b9540 00:22:12.131 [2024-07-15 14:01:06.850076] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:22:12.131 [2024-07-15 14:01:06.850089] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:22:12.131 [2024-07-15 14:01:06.850096] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:22:12.131 [2024-07-15 14:01:06.850104] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:22:12.131 [2024-07-15 14:01:06.850112] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:22:12.131 [2024-07-15 14:01:06.850119] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:22:12.131 [2024-07-15 14:01:06.850134] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:22:12.131 [2024-07-15 14:01:06.850145] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.131 [2024-07-15 14:01:06.850152] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.131 [2024-07-15 14:01:06.850158] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10b9540) 00:22:12.131 [2024-07-15 14:01:06.850168] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:12.131 [2024-07-15 14:01:06.850189] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11193c0, cid 0, qid 0 00:22:12.131 [2024-07-15 14:01:06.850408] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.131 [2024-07-15 14:01:06.850421] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.131 [2024-07-15 14:01:06.850427] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.131 [2024-07-15 14:01:06.850433] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11193c0) on tqpair=0x10b9540 00:22:12.131 [2024-07-15 14:01:06.850444] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.131 [2024-07-15 14:01:06.850451] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.131 [2024-07-15 14:01:06.850457] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10b9540) 00:22:12.131 [2024-07-15 14:01:06.850466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.131 [2024-07-15 14:01:06.850475] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.131 [2024-07-15 14:01:06.850482] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.131 [2024-07-15 14:01:06.850488] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x10b9540) 00:22:12.131 [2024-07-15 14:01:06.850496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.131 [2024-07-15 14:01:06.850505] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.131 [2024-07-15 14:01:06.850511] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.131 [2024-07-15 14:01:06.850517] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x10b9540) 00:22:12.132 [2024-07-15 14:01:06.850525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.132 [2024-07-15 14:01:06.850533] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.132 [2024-07-15 14:01:06.850540] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.132 [2024-07-15 14:01:06.850545] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b9540) 00:22:12.132 [2024-07-15 14:01:06.850553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.132 [2024-07-15 14:01:06.850562] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:22:12.132 [2024-07-15 14:01:06.850580] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:12.132 [2024-07-15 14:01:06.850592] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.132 [2024-07-15 14:01:06.850598] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10b9540) 00:22:12.132 [2024-07-15 14:01:06.850608] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.132 [2024-07-15 14:01:06.850629] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11193c0, cid 0, qid 0 00:22:12.132 [2024-07-15 14:01:06.850639] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1119540, cid 1, qid 0 00:22:12.132 [2024-07-15 14:01:06.850647] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11196c0, cid 2, qid 0 00:22:12.132 [2024-07-15 14:01:06.850654] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1119840, cid 3, qid 0 00:22:12.132 [2024-07-15 14:01:06.850662] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11199c0, cid 4, qid 0 00:22:12.132 [2024-07-15 14:01:06.850871] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.132 [2024-07-15 14:01:06.850887] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.132 [2024-07-15 14:01:06.850894] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.132 [2024-07-15 14:01:06.850900] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11199c0) on tqpair=0x10b9540 00:22:12.132 [2024-07-15 14:01:06.850912] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:22:12.132 [2024-07-15 14:01:06.850922] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:22:12.132 [2024-07-15 14:01:06.850940] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.132 [2024-07-15 14:01:06.850949] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10b9540) 00:22:12.132 [2024-07-15 14:01:06.850959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.132 [2024-07-15 14:01:06.850980] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11199c0, cid 4, qid 0 00:22:12.132 [2024-07-15 14:01:06.851181] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:12.132 [2024-07-15 14:01:06.851196] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:12.132 [2024-07-15 14:01:06.851203] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:12.132 [2024-07-15 14:01:06.851209] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10b9540): datao=0, datal=4096, cccid=4 00:22:12.132 [2024-07-15 14:01:06.851216] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11199c0) on tqpair(0x10b9540): expected_datao=0, payload_size=4096 00:22:12.132 [2024-07-15 14:01:06.851223] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.132 [2024-07-15 14:01:06.851232] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:12.132 [2024-07-15 14:01:06.851239] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:12.132 [2024-07-15 14:01:06.851270] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.132 [2024-07-15 14:01:06.851282] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.132 [2024-07-15 14:01:06.851289] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.132 [2024-07-15 14:01:06.851295] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11199c0) on tqpair=0x10b9540 00:22:12.132 [2024-07-15 14:01:06.851312] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:22:12.132 [2024-07-15 14:01:06.851348] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.132 [2024-07-15 14:01:06.851358] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10b9540) 00:22:12.132 [2024-07-15 14:01:06.851368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.132 [2024-07-15 14:01:06.851379] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.132 [2024-07-15 14:01:06.851386] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.132 [2024-07-15 14:01:06.851391] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x10b9540) 00:22:12.132 [2024-07-15 14:01:06.851400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.132 [2024-07-15 14:01:06.851425] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x11199c0, cid 4, qid 0 00:22:12.132 [2024-07-15 14:01:06.851436] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1119b40, cid 5, qid 0 00:22:12.132 [2024-07-15 14:01:06.851674] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:12.132 [2024-07-15 14:01:06.851688] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:12.132 [2024-07-15 14:01:06.851694] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:12.132 [2024-07-15 14:01:06.851700] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10b9540): datao=0, datal=1024, cccid=4 00:22:12.132 [2024-07-15 14:01:06.851707] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11199c0) on tqpair(0x10b9540): expected_datao=0, payload_size=1024 00:22:12.132 [2024-07-15 14:01:06.851729] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.132 [2024-07-15 14:01:06.855748] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:12.132 [2024-07-15 14:01:06.855775] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:12.132 [2024-07-15 14:01:06.855786] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.132 [2024-07-15 14:01:06.855796] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.132 [2024-07-15 14:01:06.855802] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.132 [2024-07-15 14:01:06.855809] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1119b40) on tqpair=0x10b9540 00:22:12.132 [2024-07-15 14:01:06.891992] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.132 [2024-07-15 14:01:06.892010] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.132 [2024-07-15 14:01:06.892031] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.132 [2024-07-15 14:01:06.892038] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11199c0) on tqpair=0x10b9540 00:22:12.132 [2024-07-15 14:01:06.892056] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.132 [2024-07-15 14:01:06.892064] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10b9540) 00:22:12.132 [2024-07-15 14:01:06.892075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.132 [2024-07-15 14:01:06.892104] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11199c0, cid 4, qid 0 00:22:12.132 [2024-07-15 14:01:06.892261] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:12.132 [2024-07-15 14:01:06.892275] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:12.132 [2024-07-15 14:01:06.892282] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:12.132 [2024-07-15 14:01:06.892288] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10b9540): datao=0, datal=3072, cccid=4 00:22:12.132 [2024-07-15 14:01:06.892295] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11199c0) on tqpair(0x10b9540): expected_datao=0, payload_size=3072 00:22:12.132 [2024-07-15 14:01:06.892302] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.132 [2024-07-15 14:01:06.892341] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:12.132 [2024-07-15 14:01:06.892350] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:12.132 [2024-07-15 14:01:06.932899] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.132 [2024-07-15 14:01:06.932917] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.132 [2024-07-15 14:01:06.932925] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.132 [2024-07-15 14:01:06.932931] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11199c0) on tqpair=0x10b9540 00:22:12.132 [2024-07-15 14:01:06.932947] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.132 [2024-07-15 14:01:06.932956] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10b9540) 00:22:12.132 [2024-07-15 14:01:06.932967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.132 [2024-07-15 14:01:06.932997] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11199c0, cid 4, qid 0 00:22:12.132 [2024-07-15 14:01:06.933158] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:12.132 [2024-07-15 14:01:06.933173] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:12.132 [2024-07-15 14:01:06.933179] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:12.132 [2024-07-15 14:01:06.933186] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10b9540): datao=0, datal=8, cccid=4 00:22:12.132 [2024-07-15 14:01:06.933193] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11199c0) on tqpair(0x10b9540): expected_datao=0, payload_size=8 00:22:12.132 [2024-07-15 14:01:06.933200] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.132 [2024-07-15 14:01:06.933209] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:12.132 [2024-07-15 14:01:06.933216] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:12.393 [2024-07-15 14:01:06.973889] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.393 [2024-07-15 14:01:06.973909] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.393 [2024-07-15 14:01:06.973916] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.393 [2024-07-15 14:01:06.973923] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11199c0) on tqpair=0x10b9540 00:22:12.393 ===================================================== 00:22:12.393 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:12.393 ===================================================== 00:22:12.393 Controller Capabilities/Features 00:22:12.393 ================================ 00:22:12.393 Vendor ID: 0000 00:22:12.393 Subsystem Vendor ID: 0000 00:22:12.393 Serial Number: .................... 00:22:12.393 Model Number: ........................................ 
00:22:12.393 Firmware Version: 24.09 00:22:12.393 Recommended Arb Burst: 0 00:22:12.393 IEEE OUI Identifier: 00 00 00 00:22:12.393 Multi-path I/O 00:22:12.393 May have multiple subsystem ports: No 00:22:12.393 May have multiple controllers: No 00:22:12.393 Associated with SR-IOV VF: No 00:22:12.393 Max Data Transfer Size: 131072 00:22:12.393 Max Number of Namespaces: 0 00:22:12.393 Max Number of I/O Queues: 1024 00:22:12.393 NVMe Specification Version (VS): 1.3 00:22:12.393 NVMe Specification Version (Identify): 1.3 00:22:12.393 Maximum Queue Entries: 128 00:22:12.393 Contiguous Queues Required: Yes 00:22:12.393 Arbitration Mechanisms Supported 00:22:12.393 Weighted Round Robin: Not Supported 00:22:12.393 Vendor Specific: Not Supported 00:22:12.393 Reset Timeout: 15000 ms 00:22:12.393 Doorbell Stride: 4 bytes 00:22:12.393 NVM Subsystem Reset: Not Supported 00:22:12.393 Command Sets Supported 00:22:12.393 NVM Command Set: Supported 00:22:12.393 Boot Partition: Not Supported 00:22:12.393 Memory Page Size Minimum: 4096 bytes 00:22:12.393 Memory Page Size Maximum: 4096 bytes 00:22:12.393 Persistent Memory Region: Not Supported 00:22:12.393 Optional Asynchronous Events Supported 00:22:12.393 Namespace Attribute Notices: Not Supported 00:22:12.393 Firmware Activation Notices: Not Supported 00:22:12.393 ANA Change Notices: Not Supported 00:22:12.393 PLE Aggregate Log Change Notices: Not Supported 00:22:12.393 LBA Status Info Alert Notices: Not Supported 00:22:12.393 EGE Aggregate Log Change Notices: Not Supported 00:22:12.393 Normal NVM Subsystem Shutdown event: Not Supported 00:22:12.393 Zone Descriptor Change Notices: Not Supported 00:22:12.393 Discovery Log Change Notices: Supported 00:22:12.393 Controller Attributes 00:22:12.393 128-bit Host Identifier: Not Supported 00:22:12.393 Non-Operational Permissive Mode: Not Supported 00:22:12.393 NVM Sets: Not Supported 00:22:12.393 Read Recovery Levels: Not Supported 00:22:12.393 Endurance Groups: Not Supported 00:22:12.393 Predictable Latency Mode: Not Supported 00:22:12.393 Traffic Based Keep ALive: Not Supported 00:22:12.393 Namespace Granularity: Not Supported 00:22:12.393 SQ Associations: Not Supported 00:22:12.393 UUID List: Not Supported 00:22:12.393 Multi-Domain Subsystem: Not Supported 00:22:12.393 Fixed Capacity Management: Not Supported 00:22:12.393 Variable Capacity Management: Not Supported 00:22:12.393 Delete Endurance Group: Not Supported 00:22:12.393 Delete NVM Set: Not Supported 00:22:12.393 Extended LBA Formats Supported: Not Supported 00:22:12.393 Flexible Data Placement Supported: Not Supported 00:22:12.393 00:22:12.393 Controller Memory Buffer Support 00:22:12.393 ================================ 00:22:12.393 Supported: No 00:22:12.393 00:22:12.393 Persistent Memory Region Support 00:22:12.393 ================================ 00:22:12.393 Supported: No 00:22:12.393 00:22:12.393 Admin Command Set Attributes 00:22:12.393 ============================ 00:22:12.393 Security Send/Receive: Not Supported 00:22:12.393 Format NVM: Not Supported 00:22:12.393 Firmware Activate/Download: Not Supported 00:22:12.393 Namespace Management: Not Supported 00:22:12.393 Device Self-Test: Not Supported 00:22:12.393 Directives: Not Supported 00:22:12.393 NVMe-MI: Not Supported 00:22:12.393 Virtualization Management: Not Supported 00:22:12.393 Doorbell Buffer Config: Not Supported 00:22:12.393 Get LBA Status Capability: Not Supported 00:22:12.393 Command & Feature Lockdown Capability: Not Supported 00:22:12.393 Abort Command Limit: 1 00:22:12.393 Async 
Event Request Limit: 4 00:22:12.393 Number of Firmware Slots: N/A 00:22:12.393 Firmware Slot 1 Read-Only: N/A 00:22:12.393 Firmware Activation Without Reset: N/A 00:22:12.393 Multiple Update Detection Support: N/A 00:22:12.393 Firmware Update Granularity: No Information Provided 00:22:12.393 Per-Namespace SMART Log: No 00:22:12.393 Asymmetric Namespace Access Log Page: Not Supported 00:22:12.393 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:12.393 Command Effects Log Page: Not Supported 00:22:12.393 Get Log Page Extended Data: Supported 00:22:12.393 Telemetry Log Pages: Not Supported 00:22:12.393 Persistent Event Log Pages: Not Supported 00:22:12.393 Supported Log Pages Log Page: May Support 00:22:12.393 Commands Supported & Effects Log Page: Not Supported 00:22:12.393 Feature Identifiers & Effects Log Page:May Support 00:22:12.393 NVMe-MI Commands & Effects Log Page: May Support 00:22:12.393 Data Area 4 for Telemetry Log: Not Supported 00:22:12.393 Error Log Page Entries Supported: 128 00:22:12.393 Keep Alive: Not Supported 00:22:12.393 00:22:12.393 NVM Command Set Attributes 00:22:12.393 ========================== 00:22:12.393 Submission Queue Entry Size 00:22:12.393 Max: 1 00:22:12.393 Min: 1 00:22:12.393 Completion Queue Entry Size 00:22:12.393 Max: 1 00:22:12.393 Min: 1 00:22:12.393 Number of Namespaces: 0 00:22:12.393 Compare Command: Not Supported 00:22:12.393 Write Uncorrectable Command: Not Supported 00:22:12.393 Dataset Management Command: Not Supported 00:22:12.393 Write Zeroes Command: Not Supported 00:22:12.393 Set Features Save Field: Not Supported 00:22:12.393 Reservations: Not Supported 00:22:12.394 Timestamp: Not Supported 00:22:12.394 Copy: Not Supported 00:22:12.394 Volatile Write Cache: Not Present 00:22:12.394 Atomic Write Unit (Normal): 1 00:22:12.394 Atomic Write Unit (PFail): 1 00:22:12.394 Atomic Compare & Write Unit: 1 00:22:12.394 Fused Compare & Write: Supported 00:22:12.394 Scatter-Gather List 00:22:12.394 SGL Command Set: Supported 00:22:12.394 SGL Keyed: Supported 00:22:12.394 SGL Bit Bucket Descriptor: Not Supported 00:22:12.394 SGL Metadata Pointer: Not Supported 00:22:12.394 Oversized SGL: Not Supported 00:22:12.394 SGL Metadata Address: Not Supported 00:22:12.394 SGL Offset: Supported 00:22:12.394 Transport SGL Data Block: Not Supported 00:22:12.394 Replay Protected Memory Block: Not Supported 00:22:12.394 00:22:12.394 Firmware Slot Information 00:22:12.394 ========================= 00:22:12.394 Active slot: 0 00:22:12.394 00:22:12.394 00:22:12.394 Error Log 00:22:12.394 ========= 00:22:12.394 00:22:12.394 Active Namespaces 00:22:12.394 ================= 00:22:12.394 Discovery Log Page 00:22:12.394 ================== 00:22:12.394 Generation Counter: 2 00:22:12.394 Number of Records: 2 00:22:12.394 Record Format: 0 00:22:12.394 00:22:12.394 Discovery Log Entry 0 00:22:12.394 ---------------------- 00:22:12.394 Transport Type: 3 (TCP) 00:22:12.394 Address Family: 1 (IPv4) 00:22:12.394 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:12.394 Entry Flags: 00:22:12.394 Duplicate Returned Information: 1 00:22:12.394 Explicit Persistent Connection Support for Discovery: 1 00:22:12.394 Transport Requirements: 00:22:12.394 Secure Channel: Not Required 00:22:12.394 Port ID: 0 (0x0000) 00:22:12.394 Controller ID: 65535 (0xffff) 00:22:12.394 Admin Max SQ Size: 128 00:22:12.394 Transport Service Identifier: 4420 00:22:12.394 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:12.394 Transport Address: 10.0.0.2 00:22:12.394 
Discovery Log Entry 1 00:22:12.394 ---------------------- 00:22:12.394 Transport Type: 3 (TCP) 00:22:12.394 Address Family: 1 (IPv4) 00:22:12.394 Subsystem Type: 2 (NVM Subsystem) 00:22:12.394 Entry Flags: 00:22:12.394 Duplicate Returned Information: 0 00:22:12.394 Explicit Persistent Connection Support for Discovery: 0 00:22:12.394 Transport Requirements: 00:22:12.394 Secure Channel: Not Required 00:22:12.394 Port ID: 0 (0x0000) 00:22:12.394 Controller ID: 65535 (0xffff) 00:22:12.394 Admin Max SQ Size: 128 00:22:12.394 Transport Service Identifier: 4420 00:22:12.394 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:22:12.394 Transport Address: 10.0.0.2 [2024-07-15 14:01:06.974035] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:22:12.394 [2024-07-15 14:01:06.974061] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11193c0) on tqpair=0x10b9540 00:22:12.394 [2024-07-15 14:01:06.974072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.394 [2024-07-15 14:01:06.974081] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1119540) on tqpair=0x10b9540 00:22:12.394 [2024-07-15 14:01:06.974103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.394 [2024-07-15 14:01:06.974111] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11196c0) on tqpair=0x10b9540 00:22:12.394 [2024-07-15 14:01:06.974118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.394 [2024-07-15 14:01:06.974126] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1119840) on tqpair=0x10b9540 00:22:12.394 [2024-07-15 14:01:06.974132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.394 [2024-07-15 14:01:06.974149] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.394 [2024-07-15 14:01:06.974173] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.394 [2024-07-15 14:01:06.974180] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b9540) 00:22:12.394 [2024-07-15 14:01:06.974191] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.394 [2024-07-15 14:01:06.974215] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1119840, cid 3, qid 0 00:22:12.394 [2024-07-15 14:01:06.974388] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.394 [2024-07-15 14:01:06.974404] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.394 [2024-07-15 14:01:06.974411] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.394 [2024-07-15 14:01:06.974418] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1119840) on tqpair=0x10b9540 00:22:12.394 [2024-07-15 14:01:06.974430] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.394 [2024-07-15 14:01:06.974438] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.394 [2024-07-15 14:01:06.974445] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b9540) 00:22:12.394 [2024-07-15 
14:01:06.974455] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.394 [2024-07-15 14:01:06.974484] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1119840, cid 3, qid 0 00:22:12.394 [2024-07-15 14:01:06.974678] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.394 [2024-07-15 14:01:06.974693] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.394 [2024-07-15 14:01:06.974699] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.394 [2024-07-15 14:01:06.974706] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1119840) on tqpair=0x10b9540 00:22:12.394 [2024-07-15 14:01:06.974713] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:22:12.394 [2024-07-15 14:01:06.974721] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:22:12.394 [2024-07-15 14:01:06.974760] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.394 [2024-07-15 14:01:06.974775] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.394 [2024-07-15 14:01:06.974782] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b9540) 00:22:12.394 [2024-07-15 14:01:06.974792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.394 [2024-07-15 14:01:06.974814] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1119840, cid 3, qid 0 00:22:12.394 [2024-07-15 14:01:06.974972] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.394 [2024-07-15 14:01:06.974987] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.394 [2024-07-15 14:01:06.974994] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.394 [2024-07-15 14:01:06.975001] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1119840) on tqpair=0x10b9540 00:22:12.394 [2024-07-15 14:01:06.975019] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.394 [2024-07-15 14:01:06.975028] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.394 [2024-07-15 14:01:06.975035] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b9540) 00:22:12.394 [2024-07-15 14:01:06.975045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.394 [2024-07-15 14:01:06.975066] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1119840, cid 3, qid 0 00:22:12.394 [2024-07-15 14:01:06.975158] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.394 [2024-07-15 14:01:06.975173] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.394 [2024-07-15 14:01:06.975179] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.394 [2024-07-15 14:01:06.975186] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1119840) on tqpair=0x10b9540 00:22:12.394 [2024-07-15 14:01:06.975203] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.394 [2024-07-15 14:01:06.975212] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.394 [2024-07-15 14:01:06.975218] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b9540) 00:22:12.394 [2024-07-15 14:01:06.975229] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.394 [2024-07-15 14:01:06.975250] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1119840, cid 3, qid 0 00:22:12.394 [2024-07-15 14:01:06.975352] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.394 [2024-07-15 14:01:06.975365] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.394 [2024-07-15 14:01:06.975371] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.394 [2024-07-15 14:01:06.975378] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1119840) on tqpair=0x10b9540 00:22:12.394 [2024-07-15 14:01:06.975395] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.394 [2024-07-15 14:01:06.975404] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.394 [2024-07-15 14:01:06.975410] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b9540) 00:22:12.394 [2024-07-15 14:01:06.975420] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.394 [2024-07-15 14:01:06.975441] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1119840, cid 3, qid 0 00:22:12.394 [2024-07-15 14:01:06.975533] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.394 [2024-07-15 14:01:06.975548] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.394 [2024-07-15 14:01:06.975555] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.394 [2024-07-15 14:01:06.975561] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1119840) on tqpair=0x10b9540 00:22:12.394 [2024-07-15 14:01:06.975578] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.394 [2024-07-15 14:01:06.975587] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.394 [2024-07-15 14:01:06.975597] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b9540) 00:22:12.394 [2024-07-15 14:01:06.975608] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.394 [2024-07-15 14:01:06.975629] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1119840, cid 3, qid 0 00:22:12.394 [2024-07-15 14:01:06.975727] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.394 [2024-07-15 14:01:06.975750] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.395 [2024-07-15 14:01:06.975758] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.395 [2024-07-15 14:01:06.975764] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1119840) on tqpair=0x10b9540 00:22:12.395 [2024-07-15 14:01:06.975782] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.395 [2024-07-15 14:01:06.975791] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.395 [2024-07-15 14:01:06.975797] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b9540) 00:22:12.395 [2024-07-15 14:01:06.975808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.395 [2024-07-15 14:01:06.975829] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1119840, cid 3, qid 0 00:22:12.395 [2024-07-15 14:01:06.975924] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.395 [2024-07-15 14:01:06.975939] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.395 [2024-07-15 14:01:06.975945] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.395 [2024-07-15 14:01:06.975952] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1119840) on tqpair=0x10b9540 00:22:12.395 [2024-07-15 14:01:06.975969] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.395 [2024-07-15 14:01:06.975978] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.395 [2024-07-15 14:01:06.975984] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b9540) 00:22:12.395 [2024-07-15 14:01:06.975995] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.395 [2024-07-15 14:01:06.976030] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1119840, cid 3, qid 0 00:22:12.395 [2024-07-15 14:01:06.976140] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.395 [2024-07-15 14:01:06.976154] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.395 [2024-07-15 14:01:06.976161] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.395 [2024-07-15 14:01:06.976167] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1119840) on tqpair=0x10b9540 00:22:12.395 [2024-07-15 14:01:06.976183] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.395 [2024-07-15 14:01:06.976191] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.395 [2024-07-15 14:01:06.976197] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b9540) 00:22:12.395 [2024-07-15 14:01:06.976207] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.395 [2024-07-15 14:01:06.976227] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1119840, cid 3, qid 0 00:22:12.395 [2024-07-15 14:01:06.976313] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.395 [2024-07-15 14:01:06.976327] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.395 [2024-07-15 14:01:06.976333] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.395 [2024-07-15 14:01:06.976339] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1119840) on tqpair=0x10b9540 00:22:12.395 [2024-07-15 14:01:06.976355] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.395 [2024-07-15 14:01:06.976364] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.395 [2024-07-15 14:01:06.976370] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b9540) 00:22:12.395 [2024-07-15 14:01:06.976383] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.395 [2024-07-15 14:01:06.976403] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1119840, cid 3, qid 0 00:22:12.395 
[2024-07-15 14:01:06.976492] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.395 [2024-07-15 14:01:06.976505] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.395 [2024-07-15 14:01:06.976511] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.395 [2024-07-15 14:01:06.976518] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1119840) on tqpair=0x10b9540 00:22:12.395 [2024-07-15 14:01:06.976534] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.395 [2024-07-15 14:01:06.976542] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.395 [2024-07-15 14:01:06.976548] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b9540) 00:22:12.395 [2024-07-15 14:01:06.976558] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.395 [2024-07-15 14:01:06.976577] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1119840, cid 3, qid 0 00:22:12.395 [2024-07-15 14:01:06.976665] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.395 [2024-07-15 14:01:06.976679] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.395 [2024-07-15 14:01:06.976685] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.395 [2024-07-15 14:01:06.976692] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1119840) on tqpair=0x10b9540 00:22:12.395 [2024-07-15 14:01:06.976707] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.395 [2024-07-15 14:01:06.976716] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.395 [2024-07-15 14:01:06.976744] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b9540) 00:22:12.395 [2024-07-15 14:01:06.976756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.395 [2024-07-15 14:01:06.976778] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1119840, cid 3, qid 0 00:22:12.395 [2024-07-15 14:01:06.976889] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.395 [2024-07-15 14:01:06.976902] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.395 [2024-07-15 14:01:06.976908] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.395 [2024-07-15 14:01:06.976915] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1119840) on tqpair=0x10b9540 00:22:12.395 [2024-07-15 14:01:06.976931] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.395 [2024-07-15 14:01:06.976940] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.395 [2024-07-15 14:01:06.976947] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b9540) 00:22:12.395 [2024-07-15 14:01:06.976957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.395 [2024-07-15 14:01:06.976978] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1119840, cid 3, qid 0 00:22:12.395 [2024-07-15 14:01:06.977118] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.395 [2024-07-15 14:01:06.977133] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
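Note on the Discovery Log Page decoded a little earlier in this run (Generation Counter 2, two records: one discovery entry and one NVM entry for nqn.2016-06.io.spdk:cnode1): it is fetched by the GET LOG PAGE (02) commands against log identifier 0x70 that appear in the trace above. A minimal sketch of the on-wire layout those transfers carry, using the field names from the NVMe-oF specification, is given below; SPDK ships its own equivalent definitions, so treat this purely as an illustration rather than the structures the identify tool actually uses.

#include <stdint.h>

struct nvmf_discovery_log_page_hdr {    /* first 1024 bytes of the log page */
	uint64_t genctr;                /* Generation Counter: 2 in the dump above */
	uint64_t numrec;                /* Number of Records: 2 */
	uint16_t recfmt;                /* Record Format: 0 */
	uint8_t  reserved[1006];
};

struct nvmf_discovery_log_page_entry {  /* one 1024-byte record per subsystem */
	uint8_t  trtype;                /* 3 = TCP */
	uint8_t  adrfam;                /* 1 = IPv4 */
	uint8_t  subtype;               /* 3 = discovery subsystem, 2 = NVM subsystem */
	uint8_t  treq;                  /* transport requirements (secure channel bits) */
	uint16_t portid;                /* Port ID: 0 */
	uint16_t cntlid;                /* 0xffff = dynamic controller model */
	uint16_t asqsz;                 /* Admin Max SQ Size: 128 */
	uint16_t eflags;                /* entry flags (duplicate / persistent-connection bits) */
	uint8_t  reserved0[20];
	char     trsvcid[32];           /* "4420" */
	uint8_t  reserved1[192];
	char     subnqn[256];           /* e.g. "nqn.2016-06.io.spdk:cnode1" */
	char     traddr[256];           /* "10.0.0.2" */
	uint8_t  tsas[256];             /* transport-specific address subtype */
};

The two 1024-byte records that follow the header are exactly the "Discovery Log Entry 0" and "Discovery Log Entry 1" blocks printed above.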
00:22:12.395 [2024-07-15 14:01:06.977139] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.395 [2024-07-15 14:01:06.977146] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1119840) on tqpair=0x10b9540 00:22:12.395 [2024-07-15 14:01:06.977162] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.395 [2024-07-15 14:01:06.977171] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.395 [2024-07-15 14:01:06.977177] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b9540) 00:22:12.395 [2024-07-15 14:01:06.977187] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.395 [2024-07-15 14:01:06.977210] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1119840, cid 3, qid 0 00:22:12.395 [2024-07-15 14:01:06.980749] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.395 [2024-07-15 14:01:06.980767] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.395 [2024-07-15 14:01:06.980774] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.395 [2024-07-15 14:01:06.980781] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1119840) on tqpair=0x10b9540 00:22:12.395 [2024-07-15 14:01:06.980799] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.395 [2024-07-15 14:01:06.980809] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.395 [2024-07-15 14:01:06.980815] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10b9540) 00:22:12.395 [2024-07-15 14:01:06.980826] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.395 [2024-07-15 14:01:06.980848] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1119840, cid 3, qid 0 00:22:12.395 [2024-07-15 14:01:06.980980] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.395 [2024-07-15 14:01:06.980995] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.395 [2024-07-15 14:01:06.981001] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.395 [2024-07-15 14:01:06.981008] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1119840) on tqpair=0x10b9540 00:22:12.395 [2024-07-15 14:01:06.981036] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:22:12.395 00:22:12.395 14:01:06 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:12.395 [2024-07-15 14:01:07.017345] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
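The -r argument handed to spdk_nvme_identify above is an SPDK transport-ID string. A minimal sketch of how such a string is usually consumed through SPDK's public host API follows; it assumes the runtime environment has already been brought up with spdk_env_opts_init()/spdk_env_init() (the DPDK EAL parameters line that follows is that step in the real tool) and relies only on long-standing calls declared in include/spdk/nvme.h, so verify the exact signatures against the SPDK tree in use.

#include <stdio.h>
#include "spdk/nvme.h"

static int connect_and_report(void)
{
	struct spdk_nvme_transport_id trid = {0};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	/* Same syntax as the -r argument passed to spdk_nvme_identify above. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return -1;
	}

	/* Drives the same connect/identify state machine the debug lines trace. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return -1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("connected to %s (cntlid 0x%04x)\n", trid.subnqn, cdata->cntlid);

	spdk_nvme_detach(ctrlr);
	return 0;
}

spdk_nvme_connect() is what kicks off the per-state controller bring-up ("connect adminq", "read vs", "read cap", ...) that the trace below walks through for nqn.2016-06.io.spdk:cnode1.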
00:22:12.395 [2024-07-15 14:01:07.017389] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3810027 ] 00:22:12.395 EAL: No free 2048 kB hugepages reported on node 1 00:22:12.395 [2024-07-15 14:01:07.051500] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:22:12.395 [2024-07-15 14:01:07.051552] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:12.395 [2024-07-15 14:01:07.051561] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:12.395 [2024-07-15 14:01:07.051575] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:12.395 [2024-07-15 14:01:07.051584] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:12.395 [2024-07-15 14:01:07.051850] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:22:12.395 [2024-07-15 14:01:07.051892] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x15b5540 0 00:22:12.395 [2024-07-15 14:01:07.058753] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:12.395 [2024-07-15 14:01:07.058773] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:12.395 [2024-07-15 14:01:07.058780] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:12.395 [2024-07-15 14:01:07.058786] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:12.395 [2024-07-15 14:01:07.058825] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.395 [2024-07-15 14:01:07.058839] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.395 [2024-07-15 14:01:07.058847] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15b5540) 00:22:12.395 [2024-07-15 14:01:07.058860] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:12.395 [2024-07-15 14:01:07.058886] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16153c0, cid 0, qid 0 00:22:12.395 [2024-07-15 14:01:07.069756] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.395 [2024-07-15 14:01:07.069775] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.395 [2024-07-15 14:01:07.069783] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.396 [2024-07-15 14:01:07.069789] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16153c0) on tqpair=0x15b5540 00:22:12.396 [2024-07-15 14:01:07.069808] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:12.396 [2024-07-15 14:01:07.069820] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:22:12.396 [2024-07-15 14:01:07.069829] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:22:12.396 [2024-07-15 14:01:07.069846] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.396 [2024-07-15 14:01:07.069854] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
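The per-state debug lines around here ("read vs", "read cap", "check en wait for cc", "disable and wait for CSTS.RDY = 0", "enable controller by writing CC.EN = 1", "wait for CSTS.RDY = 1") follow the standard NVMe controller-enable sequence, carried over Fabrics Property Get/Set capsules instead of MMIO register access. A rough sketch of that ordering, with hypothetical read_prop()/write_prop()/wait_for_rdy() helpers standing in for the property commands, looks like this:

#include <stdbool.h>
#include <stdint.h>

#define NVME_REG_CAP  0x00
#define NVME_REG_VS   0x08
#define NVME_REG_CC   0x14
#define NVME_REG_CSTS 0x1c

extern uint64_t read_prop(uint32_t offset);                    /* hypothetical: Fabrics Property Get */
extern void     write_prop(uint32_t offset, uint64_t value);   /* hypothetical: Fabrics Property Set */
extern bool     wait_for_rdy(bool target, uint64_t timeout_ms); /* hypothetical: poll CSTS.RDY */

static int enable_controller(void)
{
	uint64_t cap = read_prop(NVME_REG_CAP);  /* "setting state to read cap" */
	uint64_t vs  = read_prop(NVME_REG_VS);   /* "setting state to read vs" */
	uint64_t cc  = read_prop(NVME_REG_CC);   /* "check en wait for cc" */
	(void)cap; (void)vs;

	if (cc & 0x1) {                          /* CC.EN already set: disable first */
		cc &= ~0x1ULL;
		write_prop(NVME_REG_CC, cc);
		if (!wait_for_rdy(false, 15000)) {  /* "wait for CSTS.RDY = 0 (timeout 15000 ms)" */
			return -1;
		}
	}
	/* In the trace above CC.EN = 0 && CSTS.RDY = 0, so the disable step was skipped. */

	write_prop(NVME_REG_CC, cc | 0x1);       /* "Setting CC.EN = 1" */
	if (!wait_for_rdy(true, 15000)) {        /* "wait for CSTS.RDY = 1 (timeout 15000 ms)" */
		return -1;
	}
	return 0;  /* next: reset admin queue, IDENTIFY, configure AER, keep-alive timer */
}

Each read_prop()/write_prop() here corresponds to one of the FABRIC PROPERTY GET/SET capsules printed in the trace; only once CSTS.RDY reads 1 does the host move on to IDENTIFY, AER configuration and the keep-alive timer, as the following lines show.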
00:22:12.396 [2024-07-15 14:01:07.069861] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15b5540) 00:22:12.396 [2024-07-15 14:01:07.069871] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.396 [2024-07-15 14:01:07.069895] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16153c0, cid 0, qid 0 00:22:12.396 [2024-07-15 14:01:07.070109] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.396 [2024-07-15 14:01:07.070121] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.396 [2024-07-15 14:01:07.070128] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.396 [2024-07-15 14:01:07.070134] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16153c0) on tqpair=0x15b5540 00:22:12.396 [2024-07-15 14:01:07.070141] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:22:12.396 [2024-07-15 14:01:07.070153] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:22:12.396 [2024-07-15 14:01:07.070164] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.396 [2024-07-15 14:01:07.070171] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.396 [2024-07-15 14:01:07.070177] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15b5540) 00:22:12.396 [2024-07-15 14:01:07.070187] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.396 [2024-07-15 14:01:07.070207] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16153c0, cid 0, qid 0 00:22:12.396 [2024-07-15 14:01:07.070312] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.396 [2024-07-15 14:01:07.070324] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.396 [2024-07-15 14:01:07.070330] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.396 [2024-07-15 14:01:07.070336] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16153c0) on tqpair=0x15b5540 00:22:12.396 [2024-07-15 14:01:07.070344] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:22:12.396 [2024-07-15 14:01:07.070357] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:22:12.396 [2024-07-15 14:01:07.070367] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.396 [2024-07-15 14:01:07.070374] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.396 [2024-07-15 14:01:07.070384] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15b5540) 00:22:12.396 [2024-07-15 14:01:07.070394] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.396 [2024-07-15 14:01:07.070414] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16153c0, cid 0, qid 0 00:22:12.396 [2024-07-15 14:01:07.070504] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.396 [2024-07-15 14:01:07.070518] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:22:12.396 [2024-07-15 14:01:07.070524] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.396 [2024-07-15 14:01:07.070531] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16153c0) on tqpair=0x15b5540 00:22:12.396 [2024-07-15 14:01:07.070538] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:12.396 [2024-07-15 14:01:07.070555] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.396 [2024-07-15 14:01:07.070563] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.396 [2024-07-15 14:01:07.070569] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15b5540) 00:22:12.396 [2024-07-15 14:01:07.070578] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.396 [2024-07-15 14:01:07.070598] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16153c0, cid 0, qid 0 00:22:12.396 [2024-07-15 14:01:07.070685] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.396 [2024-07-15 14:01:07.070696] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.396 [2024-07-15 14:01:07.070703] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.396 [2024-07-15 14:01:07.070709] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16153c0) on tqpair=0x15b5540 00:22:12.396 [2024-07-15 14:01:07.070715] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:22:12.396 [2024-07-15 14:01:07.070747] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:22:12.396 [2024-07-15 14:01:07.070762] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:12.396 [2024-07-15 14:01:07.070871] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:22:12.396 [2024-07-15 14:01:07.070878] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:12.396 [2024-07-15 14:01:07.070889] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.396 [2024-07-15 14:01:07.070896] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.396 [2024-07-15 14:01:07.070902] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15b5540) 00:22:12.396 [2024-07-15 14:01:07.070912] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.396 [2024-07-15 14:01:07.070933] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16153c0, cid 0, qid 0 00:22:12.396 [2024-07-15 14:01:07.071114] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.396 [2024-07-15 14:01:07.071129] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.396 [2024-07-15 14:01:07.071135] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.396 [2024-07-15 14:01:07.071142] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16153c0) on 
tqpair=0x15b5540 00:22:12.396 [2024-07-15 14:01:07.071149] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:12.396 [2024-07-15 14:01:07.071165] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.396 [2024-07-15 14:01:07.071177] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.396 [2024-07-15 14:01:07.071184] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15b5540) 00:22:12.396 [2024-07-15 14:01:07.071194] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.396 [2024-07-15 14:01:07.071213] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16153c0, cid 0, qid 0 00:22:12.396 [2024-07-15 14:01:07.071303] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.396 [2024-07-15 14:01:07.071317] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.396 [2024-07-15 14:01:07.071323] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.396 [2024-07-15 14:01:07.071329] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16153c0) on tqpair=0x15b5540 00:22:12.396 [2024-07-15 14:01:07.071336] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:12.396 [2024-07-15 14:01:07.071344] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:22:12.396 [2024-07-15 14:01:07.071357] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:22:12.396 [2024-07-15 14:01:07.071373] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:22:12.396 [2024-07-15 14:01:07.071386] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.396 [2024-07-15 14:01:07.071394] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15b5540) 00:22:12.396 [2024-07-15 14:01:07.071404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.396 [2024-07-15 14:01:07.071424] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16153c0, cid 0, qid 0 00:22:12.396 [2024-07-15 14:01:07.071556] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:12.396 [2024-07-15 14:01:07.071570] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:12.396 [2024-07-15 14:01:07.071577] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:12.396 [2024-07-15 14:01:07.071583] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15b5540): datao=0, datal=4096, cccid=0 00:22:12.396 [2024-07-15 14:01:07.071590] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16153c0) on tqpair(0x15b5540): expected_datao=0, payload_size=4096 00:22:12.396 [2024-07-15 14:01:07.071597] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.396 [2024-07-15 14:01:07.071617] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:12.396 [2024-07-15 14:01:07.071625] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:12.396 [2024-07-15 14:01:07.111882] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.396 [2024-07-15 14:01:07.111901] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.396 [2024-07-15 14:01:07.111908] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.396 [2024-07-15 14:01:07.111915] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16153c0) on tqpair=0x15b5540 00:22:12.396 [2024-07-15 14:01:07.111926] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:22:12.396 [2024-07-15 14:01:07.111939] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:22:12.396 [2024-07-15 14:01:07.111947] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:22:12.396 [2024-07-15 14:01:07.111953] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:22:12.396 [2024-07-15 14:01:07.111960] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:22:12.396 [2024-07-15 14:01:07.111968] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:22:12.396 [2024-07-15 14:01:07.111986] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:22:12.396 [2024-07-15 14:01:07.111998] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.396 [2024-07-15 14:01:07.112005] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.396 [2024-07-15 14:01:07.112011] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15b5540) 00:22:12.396 [2024-07-15 14:01:07.112037] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:12.396 [2024-07-15 14:01:07.112059] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16153c0, cid 0, qid 0 00:22:12.396 [2024-07-15 14:01:07.112158] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.396 [2024-07-15 14:01:07.112170] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.396 [2024-07-15 14:01:07.112176] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.397 [2024-07-15 14:01:07.112183] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16153c0) on tqpair=0x15b5540 00:22:12.397 [2024-07-15 14:01:07.112192] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.397 [2024-07-15 14:01:07.112199] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.397 [2024-07-15 14:01:07.112205] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15b5540) 00:22:12.397 [2024-07-15 14:01:07.112214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.397 [2024-07-15 14:01:07.112223] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.397 [2024-07-15 14:01:07.112230] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.397 [2024-07-15 14:01:07.112235] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x15b5540) 00:22:12.397 [2024-07-15 14:01:07.112243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.397 [2024-07-15 14:01:07.112252] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.397 [2024-07-15 14:01:07.112259] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.397 [2024-07-15 14:01:07.112264] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x15b5540) 00:22:12.397 [2024-07-15 14:01:07.112272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.397 [2024-07-15 14:01:07.112281] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.397 [2024-07-15 14:01:07.112287] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.397 [2024-07-15 14:01:07.112293] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15b5540) 00:22:12.397 [2024-07-15 14:01:07.112301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.397 [2024-07-15 14:01:07.112310] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:12.397 [2024-07-15 14:01:07.112327] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:12.397 [2024-07-15 14:01:07.112339] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.397 [2024-07-15 14:01:07.112346] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15b5540) 00:22:12.397 [2024-07-15 14:01:07.112355] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.397 [2024-07-15 14:01:07.112377] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16153c0, cid 0, qid 0 00:22:12.397 [2024-07-15 14:01:07.112388] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1615540, cid 1, qid 0 00:22:12.397 [2024-07-15 14:01:07.112398] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16156c0, cid 2, qid 0 00:22:12.397 [2024-07-15 14:01:07.112406] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1615840, cid 3, qid 0 00:22:12.397 [2024-07-15 14:01:07.112414] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16159c0, cid 4, qid 0 00:22:12.397 [2024-07-15 14:01:07.112621] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.397 [2024-07-15 14:01:07.112632] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.397 [2024-07-15 14:01:07.112638] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.397 [2024-07-15 14:01:07.112644] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16159c0) on tqpair=0x15b5540 00:22:12.397 [2024-07-15 14:01:07.112652] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:22:12.397 [2024-07-15 14:01:07.112660] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to 
identify controller iocs specific (timeout 30000 ms) 00:22:12.397 [2024-07-15 14:01:07.112673] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:22:12.397 [2024-07-15 14:01:07.112682] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:12.397 [2024-07-15 14:01:07.112692] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.397 [2024-07-15 14:01:07.112698] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.397 [2024-07-15 14:01:07.112704] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15b5540) 00:22:12.397 [2024-07-15 14:01:07.112714] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:12.397 [2024-07-15 14:01:07.112758] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16159c0, cid 4, qid 0 00:22:12.397 [2024-07-15 14:01:07.112935] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.397 [2024-07-15 14:01:07.112950] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.397 [2024-07-15 14:01:07.112956] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.397 [2024-07-15 14:01:07.112963] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16159c0) on tqpair=0x15b5540 00:22:12.397 [2024-07-15 14:01:07.113039] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:22:12.397 [2024-07-15 14:01:07.113057] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:12.397 [2024-07-15 14:01:07.113071] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.397 [2024-07-15 14:01:07.113078] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15b5540) 00:22:12.397 [2024-07-15 14:01:07.113088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.397 [2024-07-15 14:01:07.113108] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16159c0, cid 4, qid 0 00:22:12.397 [2024-07-15 14:01:07.113288] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:12.397 [2024-07-15 14:01:07.113302] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:12.397 [2024-07-15 14:01:07.113308] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:12.397 [2024-07-15 14:01:07.113314] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15b5540): datao=0, datal=4096, cccid=4 00:22:12.397 [2024-07-15 14:01:07.113321] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16159c0) on tqpair(0x15b5540): expected_datao=0, payload_size=4096 00:22:12.397 [2024-07-15 14:01:07.113328] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.397 [2024-07-15 14:01:07.113348] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:12.397 [2024-07-15 14:01:07.113357] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:12.397 [2024-07-15 14:01:07.153882] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu 
type = 5 00:22:12.397 [2024-07-15 14:01:07.153900] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.397 [2024-07-15 14:01:07.153908] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.397 [2024-07-15 14:01:07.153914] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16159c0) on tqpair=0x15b5540 00:22:12.397 [2024-07-15 14:01:07.153930] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:22:12.397 [2024-07-15 14:01:07.153947] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:22:12.397 [2024-07-15 14:01:07.153965] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:22:12.397 [2024-07-15 14:01:07.153979] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.397 [2024-07-15 14:01:07.153986] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15b5540) 00:22:12.397 [2024-07-15 14:01:07.153998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.397 [2024-07-15 14:01:07.154020] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16159c0, cid 4, qid 0 00:22:12.397 [2024-07-15 14:01:07.154165] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:12.397 [2024-07-15 14:01:07.154179] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:12.397 [2024-07-15 14:01:07.154186] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:12.397 [2024-07-15 14:01:07.154192] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15b5540): datao=0, datal=4096, cccid=4 00:22:12.397 [2024-07-15 14:01:07.154199] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16159c0) on tqpair(0x15b5540): expected_datao=0, payload_size=4096 00:22:12.397 [2024-07-15 14:01:07.154206] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.398 [2024-07-15 14:01:07.154233] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:12.398 [2024-07-15 14:01:07.154242] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:12.398 [2024-07-15 14:01:07.194913] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.398 [2024-07-15 14:01:07.194932] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.398 [2024-07-15 14:01:07.194939] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.398 [2024-07-15 14:01:07.194946] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16159c0) on tqpair=0x15b5540 00:22:12.398 [2024-07-15 14:01:07.194967] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:12.398 [2024-07-15 14:01:07.194986] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:12.398 [2024-07-15 14:01:07.195000] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.398 [2024-07-15 14:01:07.195008] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15b5540) 00:22:12.398 [2024-07-15 14:01:07.195020] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.398 [2024-07-15 14:01:07.195057] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16159c0, cid 4, qid 0 00:22:12.398 [2024-07-15 14:01:07.195181] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:12.398 [2024-07-15 14:01:07.195192] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:12.398 [2024-07-15 14:01:07.195199] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:12.398 [2024-07-15 14:01:07.195204] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15b5540): datao=0, datal=4096, cccid=4 00:22:12.398 [2024-07-15 14:01:07.195216] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16159c0) on tqpair(0x15b5540): expected_datao=0, payload_size=4096 00:22:12.398 [2024-07-15 14:01:07.195223] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.398 [2024-07-15 14:01:07.195239] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:12.398 [2024-07-15 14:01:07.195247] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:12.657 [2024-07-15 14:01:07.235891] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.657 [2024-07-15 14:01:07.235909] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.657 [2024-07-15 14:01:07.235916] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.657 [2024-07-15 14:01:07.235923] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16159c0) on tqpair=0x15b5540 00:22:12.657 [2024-07-15 14:01:07.235936] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:12.657 [2024-07-15 14:01:07.235951] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:22:12.657 [2024-07-15 14:01:07.235967] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:22:12.657 [2024-07-15 14:01:07.235977] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:12.657 [2024-07-15 14:01:07.235986] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:12.657 [2024-07-15 14:01:07.235994] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:22:12.657 [2024-07-15 14:01:07.236002] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:22:12.657 [2024-07-15 14:01:07.236010] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:22:12.657 [2024-07-15 14:01:07.236018] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:22:12.657 [2024-07-15 14:01:07.236050] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.657 [2024-07-15 14:01:07.236058] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x15b5540) 00:22:12.657 [2024-07-15 14:01:07.236069] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.657 [2024-07-15 14:01:07.236079] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.657 [2024-07-15 14:01:07.236085] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.657 [2024-07-15 14:01:07.236091] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x15b5540) 00:22:12.657 [2024-07-15 14:01:07.236100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.657 [2024-07-15 14:01:07.236137] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16159c0, cid 4, qid 0 00:22:12.657 [2024-07-15 14:01:07.236148] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1615b40, cid 5, qid 0 00:22:12.657 [2024-07-15 14:01:07.236323] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.657 [2024-07-15 14:01:07.236337] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.657 [2024-07-15 14:01:07.236343] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.657 [2024-07-15 14:01:07.236349] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16159c0) on tqpair=0x15b5540 00:22:12.657 [2024-07-15 14:01:07.236358] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.657 [2024-07-15 14:01:07.236367] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.657 [2024-07-15 14:01:07.236373] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.657 [2024-07-15 14:01:07.236382] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1615b40) on tqpair=0x15b5540 00:22:12.657 [2024-07-15 14:01:07.236398] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.657 [2024-07-15 14:01:07.236406] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x15b5540) 00:22:12.657 [2024-07-15 14:01:07.236416] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.657 [2024-07-15 14:01:07.236435] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1615b40, cid 5, qid 0 00:22:12.657 [2024-07-15 14:01:07.236543] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.657 [2024-07-15 14:01:07.236555] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.657 [2024-07-15 14:01:07.236561] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.657 [2024-07-15 14:01:07.236567] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1615b40) on tqpair=0x15b5540 00:22:12.657 [2024-07-15 14:01:07.236582] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.657 [2024-07-15 14:01:07.236590] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x15b5540) 00:22:12.657 [2024-07-15 14:01:07.236600] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.657 [2024-07-15 14:01:07.236619] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1615b40, cid 5, qid 0 00:22:12.657 [2024-07-15 14:01:07.236712] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.657 [2024-07-15 14:01:07.240747] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.657 [2024-07-15 14:01:07.240760] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.657 [2024-07-15 14:01:07.240767] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1615b40) on tqpair=0x15b5540 00:22:12.657 [2024-07-15 14:01:07.240786] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.657 [2024-07-15 14:01:07.240795] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x15b5540) 00:22:12.657 [2024-07-15 14:01:07.240805] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.657 [2024-07-15 14:01:07.240828] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1615b40, cid 5, qid 0 00:22:12.657 [2024-07-15 14:01:07.240998] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.657 [2024-07-15 14:01:07.241028] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.657 [2024-07-15 14:01:07.241036] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.657 [2024-07-15 14:01:07.241042] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1615b40) on tqpair=0x15b5540 00:22:12.657 [2024-07-15 14:01:07.241067] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.657 [2024-07-15 14:01:07.241077] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x15b5540) 00:22:12.657 [2024-07-15 14:01:07.241102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.657 [2024-07-15 14:01:07.241115] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.657 [2024-07-15 14:01:07.241122] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15b5540) 00:22:12.657 [2024-07-15 14:01:07.241131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.657 [2024-07-15 14:01:07.241141] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.657 [2024-07-15 14:01:07.241148] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x15b5540) 00:22:12.657 [2024-07-15 14:01:07.241157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.657 [2024-07-15 14:01:07.241171] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.657 [2024-07-15 14:01:07.241179] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x15b5540) 00:22:12.657 [2024-07-15 14:01:07.241188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.657 [2024-07-15 14:01:07.241209] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1615b40, cid 5, qid 0 00:22:12.657 [2024-07-15 14:01:07.241220] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16159c0, cid 4, qid 0 
00:22:12.657 [2024-07-15 14:01:07.241227] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1615cc0, cid 6, qid 0 00:22:12.657 [2024-07-15 14:01:07.241234] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1615e40, cid 7, qid 0 00:22:12.658 [2024-07-15 14:01:07.241473] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:12.658 [2024-07-15 14:01:07.241488] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:12.658 [2024-07-15 14:01:07.241494] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:12.658 [2024-07-15 14:01:07.241500] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15b5540): datao=0, datal=8192, cccid=5 00:22:12.658 [2024-07-15 14:01:07.241507] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1615b40) on tqpair(0x15b5540): expected_datao=0, payload_size=8192 00:22:12.658 [2024-07-15 14:01:07.241514] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.658 [2024-07-15 14:01:07.241545] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:12.658 [2024-07-15 14:01:07.241555] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:12.658 [2024-07-15 14:01:07.241563] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:12.658 [2024-07-15 14:01:07.241571] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:12.658 [2024-07-15 14:01:07.241577] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:12.658 [2024-07-15 14:01:07.241583] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15b5540): datao=0, datal=512, cccid=4 00:22:12.658 [2024-07-15 14:01:07.241590] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16159c0) on tqpair(0x15b5540): expected_datao=0, payload_size=512 00:22:12.658 [2024-07-15 14:01:07.241596] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.658 [2024-07-15 14:01:07.241605] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:12.658 [2024-07-15 14:01:07.241611] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:12.658 [2024-07-15 14:01:07.241619] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:12.658 [2024-07-15 14:01:07.241627] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:12.658 [2024-07-15 14:01:07.241632] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:12.658 [2024-07-15 14:01:07.241638] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15b5540): datao=0, datal=512, cccid=6 00:22:12.658 [2024-07-15 14:01:07.241645] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1615cc0) on tqpair(0x15b5540): expected_datao=0, payload_size=512 00:22:12.658 [2024-07-15 14:01:07.241652] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.658 [2024-07-15 14:01:07.241660] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:12.658 [2024-07-15 14:01:07.241666] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:12.658 [2024-07-15 14:01:07.241674] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:12.658 [2024-07-15 14:01:07.241682] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:12.658 [2024-07-15 14:01:07.241688] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:12.658 [2024-07-15 14:01:07.241693] 
nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15b5540): datao=0, datal=4096, cccid=7 00:22:12.658 [2024-07-15 14:01:07.241700] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1615e40) on tqpair(0x15b5540): expected_datao=0, payload_size=4096 00:22:12.658 [2024-07-15 14:01:07.241710] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.658 [2024-07-15 14:01:07.241735] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:12.658 [2024-07-15 14:01:07.241752] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:12.658 [2024-07-15 14:01:07.241765] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.658 [2024-07-15 14:01:07.241774] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.658 [2024-07-15 14:01:07.241781] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.658 [2024-07-15 14:01:07.241787] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1615b40) on tqpair=0x15b5540 00:22:12.658 [2024-07-15 14:01:07.241811] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.658 [2024-07-15 14:01:07.241822] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.658 [2024-07-15 14:01:07.241829] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.658 [2024-07-15 14:01:07.241835] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16159c0) on tqpair=0x15b5540 00:22:12.658 [2024-07-15 14:01:07.241850] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.658 [2024-07-15 14:01:07.241860] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.658 [2024-07-15 14:01:07.241866] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.658 [2024-07-15 14:01:07.241873] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1615cc0) on tqpair=0x15b5540 00:22:12.658 [2024-07-15 14:01:07.241883] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.658 [2024-07-15 14:01:07.241892] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.658 [2024-07-15 14:01:07.241898] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.658 [2024-07-15 14:01:07.241904] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1615e40) on tqpair=0x15b5540 00:22:12.658 ===================================================== 00:22:12.658 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:12.658 ===================================================== 00:22:12.658 Controller Capabilities/Features 00:22:12.658 ================================ 00:22:12.658 Vendor ID: 8086 00:22:12.658 Subsystem Vendor ID: 8086 00:22:12.658 Serial Number: SPDK00000000000001 00:22:12.658 Model Number: SPDK bdev Controller 00:22:12.658 Firmware Version: 24.09 00:22:12.658 Recommended Arb Burst: 6 00:22:12.658 IEEE OUI Identifier: e4 d2 5c 00:22:12.658 Multi-path I/O 00:22:12.658 May have multiple subsystem ports: Yes 00:22:12.658 May have multiple controllers: Yes 00:22:12.658 Associated with SR-IOV VF: No 00:22:12.658 Max Data Transfer Size: 131072 00:22:12.658 Max Number of Namespaces: 32 00:22:12.658 Max Number of I/O Queues: 127 00:22:12.658 NVMe Specification Version (VS): 1.3 00:22:12.658 NVMe Specification Version (Identify): 1.3 00:22:12.658 Maximum Queue Entries: 128 00:22:12.658 Contiguous Queues Required: Yes 00:22:12.658 
Arbitration Mechanisms Supported 00:22:12.658 Weighted Round Robin: Not Supported 00:22:12.658 Vendor Specific: Not Supported 00:22:12.658 Reset Timeout: 15000 ms 00:22:12.658 Doorbell Stride: 4 bytes 00:22:12.658 NVM Subsystem Reset: Not Supported 00:22:12.658 Command Sets Supported 00:22:12.658 NVM Command Set: Supported 00:22:12.658 Boot Partition: Not Supported 00:22:12.658 Memory Page Size Minimum: 4096 bytes 00:22:12.658 Memory Page Size Maximum: 4096 bytes 00:22:12.658 Persistent Memory Region: Not Supported 00:22:12.658 Optional Asynchronous Events Supported 00:22:12.658 Namespace Attribute Notices: Supported 00:22:12.658 Firmware Activation Notices: Not Supported 00:22:12.658 ANA Change Notices: Not Supported 00:22:12.658 PLE Aggregate Log Change Notices: Not Supported 00:22:12.658 LBA Status Info Alert Notices: Not Supported 00:22:12.658 EGE Aggregate Log Change Notices: Not Supported 00:22:12.658 Normal NVM Subsystem Shutdown event: Not Supported 00:22:12.658 Zone Descriptor Change Notices: Not Supported 00:22:12.658 Discovery Log Change Notices: Not Supported 00:22:12.658 Controller Attributes 00:22:12.658 128-bit Host Identifier: Supported 00:22:12.658 Non-Operational Permissive Mode: Not Supported 00:22:12.658 NVM Sets: Not Supported 00:22:12.658 Read Recovery Levels: Not Supported 00:22:12.658 Endurance Groups: Not Supported 00:22:12.658 Predictable Latency Mode: Not Supported 00:22:12.658 Traffic Based Keep ALive: Not Supported 00:22:12.658 Namespace Granularity: Not Supported 00:22:12.658 SQ Associations: Not Supported 00:22:12.658 UUID List: Not Supported 00:22:12.658 Multi-Domain Subsystem: Not Supported 00:22:12.658 Fixed Capacity Management: Not Supported 00:22:12.658 Variable Capacity Management: Not Supported 00:22:12.658 Delete Endurance Group: Not Supported 00:22:12.658 Delete NVM Set: Not Supported 00:22:12.658 Extended LBA Formats Supported: Not Supported 00:22:12.658 Flexible Data Placement Supported: Not Supported 00:22:12.658 00:22:12.658 Controller Memory Buffer Support 00:22:12.658 ================================ 00:22:12.658 Supported: No 00:22:12.658 00:22:12.658 Persistent Memory Region Support 00:22:12.658 ================================ 00:22:12.658 Supported: No 00:22:12.658 00:22:12.658 Admin Command Set Attributes 00:22:12.658 ============================ 00:22:12.658 Security Send/Receive: Not Supported 00:22:12.658 Format NVM: Not Supported 00:22:12.658 Firmware Activate/Download: Not Supported 00:22:12.658 Namespace Management: Not Supported 00:22:12.658 Device Self-Test: Not Supported 00:22:12.658 Directives: Not Supported 00:22:12.658 NVMe-MI: Not Supported 00:22:12.658 Virtualization Management: Not Supported 00:22:12.658 Doorbell Buffer Config: Not Supported 00:22:12.658 Get LBA Status Capability: Not Supported 00:22:12.658 Command & Feature Lockdown Capability: Not Supported 00:22:12.658 Abort Command Limit: 4 00:22:12.658 Async Event Request Limit: 4 00:22:12.658 Number of Firmware Slots: N/A 00:22:12.658 Firmware Slot 1 Read-Only: N/A 00:22:12.658 Firmware Activation Without Reset: N/A 00:22:12.658 Multiple Update Detection Support: N/A 00:22:12.658 Firmware Update Granularity: No Information Provided 00:22:12.658 Per-Namespace SMART Log: No 00:22:12.658 Asymmetric Namespace Access Log Page: Not Supported 00:22:12.658 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:22:12.658 Command Effects Log Page: Supported 00:22:12.658 Get Log Page Extended Data: Supported 00:22:12.658 Telemetry Log Pages: Not Supported 00:22:12.658 Persistent Event Log 
Pages: Not Supported 00:22:12.658 Supported Log Pages Log Page: May Support 00:22:12.658 Commands Supported & Effects Log Page: Not Supported 00:22:12.658 Feature Identifiers & Effects Log Page:May Support 00:22:12.658 NVMe-MI Commands & Effects Log Page: May Support 00:22:12.658 Data Area 4 for Telemetry Log: Not Supported 00:22:12.658 Error Log Page Entries Supported: 128 00:22:12.658 Keep Alive: Supported 00:22:12.658 Keep Alive Granularity: 10000 ms 00:22:12.658 00:22:12.658 NVM Command Set Attributes 00:22:12.658 ========================== 00:22:12.658 Submission Queue Entry Size 00:22:12.658 Max: 64 00:22:12.658 Min: 64 00:22:12.658 Completion Queue Entry Size 00:22:12.659 Max: 16 00:22:12.659 Min: 16 00:22:12.659 Number of Namespaces: 32 00:22:12.659 Compare Command: Supported 00:22:12.659 Write Uncorrectable Command: Not Supported 00:22:12.659 Dataset Management Command: Supported 00:22:12.659 Write Zeroes Command: Supported 00:22:12.659 Set Features Save Field: Not Supported 00:22:12.659 Reservations: Supported 00:22:12.659 Timestamp: Not Supported 00:22:12.659 Copy: Supported 00:22:12.659 Volatile Write Cache: Present 00:22:12.659 Atomic Write Unit (Normal): 1 00:22:12.659 Atomic Write Unit (PFail): 1 00:22:12.659 Atomic Compare & Write Unit: 1 00:22:12.659 Fused Compare & Write: Supported 00:22:12.659 Scatter-Gather List 00:22:12.659 SGL Command Set: Supported 00:22:12.659 SGL Keyed: Supported 00:22:12.659 SGL Bit Bucket Descriptor: Not Supported 00:22:12.659 SGL Metadata Pointer: Not Supported 00:22:12.659 Oversized SGL: Not Supported 00:22:12.659 SGL Metadata Address: Not Supported 00:22:12.659 SGL Offset: Supported 00:22:12.659 Transport SGL Data Block: Not Supported 00:22:12.659 Replay Protected Memory Block: Not Supported 00:22:12.659 00:22:12.659 Firmware Slot Information 00:22:12.659 ========================= 00:22:12.659 Active slot: 1 00:22:12.659 Slot 1 Firmware Revision: 24.09 00:22:12.659 00:22:12.659 00:22:12.659 Commands Supported and Effects 00:22:12.659 ============================== 00:22:12.659 Admin Commands 00:22:12.659 -------------- 00:22:12.659 Get Log Page (02h): Supported 00:22:12.659 Identify (06h): Supported 00:22:12.659 Abort (08h): Supported 00:22:12.659 Set Features (09h): Supported 00:22:12.659 Get Features (0Ah): Supported 00:22:12.659 Asynchronous Event Request (0Ch): Supported 00:22:12.659 Keep Alive (18h): Supported 00:22:12.659 I/O Commands 00:22:12.659 ------------ 00:22:12.659 Flush (00h): Supported LBA-Change 00:22:12.659 Write (01h): Supported LBA-Change 00:22:12.659 Read (02h): Supported 00:22:12.659 Compare (05h): Supported 00:22:12.659 Write Zeroes (08h): Supported LBA-Change 00:22:12.659 Dataset Management (09h): Supported LBA-Change 00:22:12.659 Copy (19h): Supported LBA-Change 00:22:12.659 00:22:12.659 Error Log 00:22:12.659 ========= 00:22:12.659 00:22:12.659 Arbitration 00:22:12.659 =========== 00:22:12.659 Arbitration Burst: 1 00:22:12.659 00:22:12.659 Power Management 00:22:12.659 ================ 00:22:12.659 Number of Power States: 1 00:22:12.659 Current Power State: Power State #0 00:22:12.659 Power State #0: 00:22:12.659 Max Power: 0.00 W 00:22:12.659 Non-Operational State: Operational 00:22:12.659 Entry Latency: Not Reported 00:22:12.659 Exit Latency: Not Reported 00:22:12.659 Relative Read Throughput: 0 00:22:12.659 Relative Read Latency: 0 00:22:12.659 Relative Write Throughput: 0 00:22:12.659 Relative Write Latency: 0 00:22:12.659 Idle Power: Not Reported 00:22:12.659 Active Power: Not Reported 00:22:12.659 
Non-Operational Permissive Mode: Not Supported 00:22:12.659 00:22:12.659 Health Information 00:22:12.659 ================== 00:22:12.659 Critical Warnings: 00:22:12.659 Available Spare Space: OK 00:22:12.659 Temperature: OK 00:22:12.659 Device Reliability: OK 00:22:12.659 Read Only: No 00:22:12.659 Volatile Memory Backup: OK 00:22:12.659 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:12.659 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:22:12.659 Available Spare: 0% 00:22:12.659 Available Spare Threshold: 0% 00:22:12.659 Life Percentage Used:[2024-07-15 14:01:07.242018] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.659 [2024-07-15 14:01:07.242044] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x15b5540) 00:22:12.659 [2024-07-15 14:01:07.242055] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.659 [2024-07-15 14:01:07.242077] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1615e40, cid 7, qid 0 00:22:12.659 [2024-07-15 14:01:07.242272] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.659 [2024-07-15 14:01:07.242284] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.659 [2024-07-15 14:01:07.242291] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.659 [2024-07-15 14:01:07.242297] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1615e40) on tqpair=0x15b5540 00:22:12.659 [2024-07-15 14:01:07.242339] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:22:12.659 [2024-07-15 14:01:07.242357] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16153c0) on tqpair=0x15b5540 00:22:12.659 [2024-07-15 14:01:07.242367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.659 [2024-07-15 14:01:07.242375] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1615540) on tqpair=0x15b5540 00:22:12.659 [2024-07-15 14:01:07.242382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.659 [2024-07-15 14:01:07.242390] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16156c0) on tqpair=0x15b5540 00:22:12.659 [2024-07-15 14:01:07.242397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.659 [2024-07-15 14:01:07.242405] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1615840) on tqpair=0x15b5540 00:22:12.659 [2024-07-15 14:01:07.242412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.659 [2024-07-15 14:01:07.242426] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.659 [2024-07-15 14:01:07.242434] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.659 [2024-07-15 14:01:07.242440] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15b5540) 00:22:12.659 [2024-07-15 14:01:07.242450] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.659 [2024-07-15 14:01:07.242471] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1615840, cid 3, qid 0 00:22:12.659 [2024-07-15 14:01:07.242631] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.659 [2024-07-15 14:01:07.242645] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.659 [2024-07-15 14:01:07.242651] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.659 [2024-07-15 14:01:07.242657] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1615840) on tqpair=0x15b5540 00:22:12.659 [2024-07-15 14:01:07.242667] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.659 [2024-07-15 14:01:07.242674] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.659 [2024-07-15 14:01:07.242680] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15b5540) 00:22:12.659 [2024-07-15 14:01:07.242690] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.659 [2024-07-15 14:01:07.242714] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1615840, cid 3, qid 0 00:22:12.659 [2024-07-15 14:01:07.242842] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.659 [2024-07-15 14:01:07.242857] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.659 [2024-07-15 14:01:07.242864] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.659 [2024-07-15 14:01:07.242871] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1615840) on tqpair=0x15b5540 00:22:12.659 [2024-07-15 14:01:07.242878] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:22:12.659 [2024-07-15 14:01:07.242886] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:22:12.659 [2024-07-15 14:01:07.242902] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.659 [2024-07-15 14:01:07.242911] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.659 [2024-07-15 14:01:07.242917] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15b5540) 00:22:12.659 [2024-07-15 14:01:07.242928] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.659 [2024-07-15 14:01:07.242949] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1615840, cid 3, qid 0 00:22:12.659 [2024-07-15 14:01:07.243069] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.659 [2024-07-15 14:01:07.243080] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.659 [2024-07-15 14:01:07.243087] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.659 [2024-07-15 14:01:07.243093] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1615840) on tqpair=0x15b5540 00:22:12.659 [2024-07-15 14:01:07.243108] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.659 [2024-07-15 14:01:07.243117] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.659 [2024-07-15 14:01:07.243123] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15b5540) 00:22:12.659 [2024-07-15 14:01:07.243132] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.659 [2024-07-15 14:01:07.243152] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1615840, cid 3, qid 0 00:22:12.659 [2024-07-15 14:01:07.243245] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.659 [2024-07-15 14:01:07.243261] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.659 [2024-07-15 14:01:07.243268] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.659 [2024-07-15 14:01:07.243275] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1615840) on tqpair=0x15b5540 00:22:12.659 [2024-07-15 14:01:07.243291] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.659 [2024-07-15 14:01:07.243299] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.659 [2024-07-15 14:01:07.243305] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15b5540) 00:22:12.659 [2024-07-15 14:01:07.243314] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.659 [2024-07-15 14:01:07.243334] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1615840, cid 3, qid 0 00:22:12.659 [2024-07-15 14:01:07.243419] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.659 [2024-07-15 14:01:07.243433] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.659 [2024-07-15 14:01:07.243439] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.659 [2024-07-15 14:01:07.243445] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1615840) on tqpair=0x15b5540 00:22:12.659 [2024-07-15 14:01:07.243461] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.659 [2024-07-15 14:01:07.243469] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.659 [2024-07-15 14:01:07.243475] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15b5540) 00:22:12.660 [2024-07-15 14:01:07.243485] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.660 [2024-07-15 14:01:07.243504] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1615840, cid 3, qid 0 00:22:12.660 [2024-07-15 14:01:07.243594] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.660 [2024-07-15 14:01:07.243604] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.660 [2024-07-15 14:01:07.243611] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.660 [2024-07-15 14:01:07.243617] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1615840) on tqpair=0x15b5540 00:22:12.660 [2024-07-15 14:01:07.243632] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.660 [2024-07-15 14:01:07.243640] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.660 [2024-07-15 14:01:07.243646] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15b5540) 00:22:12.660 [2024-07-15 14:01:07.243656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.660 [2024-07-15 14:01:07.243675] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1615840, cid 3, qid 0 00:22:12.660 [2024-07-15 
14:01:07.243787] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.660 [2024-07-15 14:01:07.243803] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.660 [2024-07-15 14:01:07.243809] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.660 [2024-07-15 14:01:07.243816] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1615840) on tqpair=0x15b5540 00:22:12.660 [2024-07-15 14:01:07.243833] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.660 [2024-07-15 14:01:07.243842] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.660 [2024-07-15 14:01:07.243849] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15b5540) 00:22:12.660 [2024-07-15 14:01:07.243859] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.660 [2024-07-15 14:01:07.243880] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1615840, cid 3, qid 0 00:22:12.660 [2024-07-15 14:01:07.243974] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.660 [2024-07-15 14:01:07.243989] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.660 [2024-07-15 14:01:07.243995] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.660 [2024-07-15 14:01:07.244005] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1615840) on tqpair=0x15b5540 00:22:12.660 [2024-07-15 14:01:07.244037] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.660 [2024-07-15 14:01:07.244046] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.660 [2024-07-15 14:01:07.244052] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15b5540) 00:22:12.660 [2024-07-15 14:01:07.244063] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.660 [2024-07-15 14:01:07.244098] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1615840, cid 3, qid 0 00:22:12.660 [2024-07-15 14:01:07.244201] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.660 [2024-07-15 14:01:07.244215] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.660 [2024-07-15 14:01:07.244221] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.660 [2024-07-15 14:01:07.244227] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1615840) on tqpair=0x15b5540 00:22:12.660 [2024-07-15 14:01:07.244243] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.660 [2024-07-15 14:01:07.244251] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.660 [2024-07-15 14:01:07.244257] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15b5540) 00:22:12.660 [2024-07-15 14:01:07.244267] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.660 [2024-07-15 14:01:07.244286] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1615840, cid 3, qid 0 00:22:12.660 [2024-07-15 14:01:07.244377] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.660 [2024-07-15 14:01:07.244391] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.660 
[2024-07-15 14:01:07.244397] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.660 [2024-07-15 14:01:07.244403] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1615840) on tqpair=0x15b5540 00:22:12.660 [2024-07-15 14:01:07.244419] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.660 [2024-07-15 14:01:07.244427] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.660 [2024-07-15 14:01:07.244433] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15b5540) 00:22:12.660 [2024-07-15 14:01:07.244442] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.660 [2024-07-15 14:01:07.244462] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1615840, cid 3, qid 0 00:22:12.660 [2024-07-15 14:01:07.244550] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.660 [2024-07-15 14:01:07.244564] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.660 [2024-07-15 14:01:07.244570] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.660 [2024-07-15 14:01:07.244576] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1615840) on tqpair=0x15b5540 00:22:12.660 [2024-07-15 14:01:07.244592] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.660 [2024-07-15 14:01:07.244600] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.660 [2024-07-15 14:01:07.244606] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15b5540) 00:22:12.660 [2024-07-15 14:01:07.244615] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.660 [2024-07-15 14:01:07.244635] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1615840, cid 3, qid 0 00:22:12.660 [2024-07-15 14:01:07.244735] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.660 [2024-07-15 14:01:07.248759] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.660 [2024-07-15 14:01:07.248767] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.660 [2024-07-15 14:01:07.248773] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1615840) on tqpair=0x15b5540 00:22:12.660 [2024-07-15 14:01:07.248795] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:12.660 [2024-07-15 14:01:07.248805] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:12.660 [2024-07-15 14:01:07.248811] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15b5540) 00:22:12.660 [2024-07-15 14:01:07.248822] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.660 [2024-07-15 14:01:07.248843] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1615840, cid 3, qid 0 00:22:12.660 [2024-07-15 14:01:07.249012] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:12.660 [2024-07-15 14:01:07.249041] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:12.660 [2024-07-15 14:01:07.249048] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:12.660 [2024-07-15 14:01:07.249054] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1615840) on tqpair=0x15b5540 00:22:12.660 [2024-07-15 14:01:07.249067] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:22:12.660 0% 00:22:12.660 Data Units Read: 0 00:22:12.660 Data Units Written: 0 00:22:12.660 Host Read Commands: 0 00:22:12.660 Host Write Commands: 0 00:22:12.660 Controller Busy Time: 0 minutes 00:22:12.660 Power Cycles: 0 00:22:12.660 Power On Hours: 0 hours 00:22:12.660 Unsafe Shutdowns: 0 00:22:12.660 Unrecoverable Media Errors: 0 00:22:12.660 Lifetime Error Log Entries: 0 00:22:12.660 Warning Temperature Time: 0 minutes 00:22:12.660 Critical Temperature Time: 0 minutes 00:22:12.660 00:22:12.660 Number of Queues 00:22:12.660 ================ 00:22:12.660 Number of I/O Submission Queues: 127 00:22:12.660 Number of I/O Completion Queues: 127 00:22:12.660 00:22:12.660 Active Namespaces 00:22:12.660 ================= 00:22:12.660 Namespace ID:1 00:22:12.660 Error Recovery Timeout: Unlimited 00:22:12.660 Command Set Identifier: NVM (00h) 00:22:12.660 Deallocate: Supported 00:22:12.660 Deallocated/Unwritten Error: Not Supported 00:22:12.660 Deallocated Read Value: Unknown 00:22:12.660 Deallocate in Write Zeroes: Not Supported 00:22:12.660 Deallocated Guard Field: 0xFFFF 00:22:12.660 Flush: Supported 00:22:12.660 Reservation: Supported 00:22:12.660 Namespace Sharing Capabilities: Multiple Controllers 00:22:12.660 Size (in LBAs): 131072 (0GiB) 00:22:12.660 Capacity (in LBAs): 131072 (0GiB) 00:22:12.660 Utilization (in LBAs): 131072 (0GiB) 00:22:12.660 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:12.660 EUI64: ABCDEF0123456789 00:22:12.660 UUID: 90237c9d-64aa-4b6c-bffd-9d11d613496f 00:22:12.660 Thin Provisioning: Not Supported 00:22:12.660 Per-NS Atomic Units: Yes 00:22:12.660 Atomic Boundary Size (Normal): 0 00:22:12.660 Atomic Boundary Size (PFail): 0 00:22:12.660 Atomic Boundary Offset: 0 00:22:12.660 Maximum Single Source Range Length: 65535 00:22:12.660 Maximum Copy Length: 65535 00:22:12.660 Maximum Source Range Count: 1 00:22:12.660 NGUID/EUI64 Never Reused: No 00:22:12.660 Namespace Write Protected: No 00:22:12.660 Number of LBA Formats: 1 00:22:12.660 Current LBA Format: LBA Format #00 00:22:12.660 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:12.660 00:22:12.660 14:01:07 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:22:12.660 14:01:07 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:12.660 14:01:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.660 14:01:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:12.660 14:01:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.660 14:01:07 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:12.660 14:01:07 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:22:12.660 14:01:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:12.660 14:01:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:22:12.660 14:01:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:12.660 14:01:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:22:12.660 14:01:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:12.660 14:01:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:12.660 rmmod nvme_tcp 00:22:12.660 rmmod 
nvme_fabrics 00:22:12.660 rmmod nvme_keyring 00:22:12.660 14:01:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:12.660 14:01:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:22:12.660 14:01:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:22:12.660 14:01:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 3809867 ']' 00:22:12.660 14:01:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 3809867 00:22:12.661 14:01:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 3809867 ']' 00:22:12.661 14:01:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 3809867 00:22:12.661 14:01:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:22:12.661 14:01:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:12.661 14:01:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3809867 00:22:12.661 14:01:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:12.661 14:01:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:12.661 14:01:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3809867' 00:22:12.661 killing process with pid 3809867 00:22:12.661 14:01:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 3809867 00:22:12.661 14:01:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 3809867 00:22:12.918 14:01:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:12.918 14:01:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:12.918 14:01:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:12.918 14:01:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:12.918 14:01:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:12.918 14:01:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:12.918 14:01:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:12.918 14:01:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:15.451 14:01:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:15.451 00:22:15.451 real 0m6.260s 00:22:15.451 user 0m7.664s 00:22:15.451 sys 0m2.012s 00:22:15.451 14:01:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:15.451 14:01:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:15.451 ************************************ 00:22:15.451 END TEST nvmf_identify 00:22:15.451 ************************************ 00:22:15.451 14:01:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:15.451 14:01:09 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:15.451 14:01:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:15.451 14:01:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:15.451 14:01:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:15.451 ************************************ 00:22:15.451 START TEST nvmf_perf 00:22:15.451 ************************************ 00:22:15.451 14:01:09 nvmf_tcp.nvmf_perf 
-- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:15.451 * Looking for test storage... 00:22:15.451 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:15.451 14:01:09 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:15.451 14:01:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:15.451 14:01:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:15.451 14:01:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:15.451 14:01:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:15.451 14:01:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:15.451 14:01:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:15.451 14:01:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:15.451 14:01:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:15.451 14:01:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:15.451 14:01:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:15.451 14:01:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:15.451 14:01:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:15.451 14:01:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:22:15.451 14:01:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:15.451 14:01:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:15.451 14:01:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:15.451 14:01:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:15.451 14:01:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:15.451 14:01:09 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:15.451 14:01:09 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:15.451 14:01:09 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:15.451 14:01:09 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.451 14:01:09 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.451 14:01:09 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.451 14:01:09 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:22:15.451 14:01:09 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.451 14:01:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:22:15.451 14:01:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:15.451 14:01:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:15.451 14:01:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:15.451 14:01:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:15.451 14:01:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:15.451 14:01:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:15.451 14:01:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:15.451 14:01:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:15.451 14:01:09 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:15.451 14:01:09 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:15.451 14:01:09 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:15.451 14:01:09 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:15.451 14:01:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:15.451 14:01:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:15.451 14:01:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:15.451 14:01:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:15.451 14:01:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:15.451 14:01:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:15.451 
14:01:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:15.451 14:01:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:15.451 14:01:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:15.451 14:01:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:15.451 14:01:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:22:15.451 14:01:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- 
# (( 2 == 0 )) 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:22:17.375 Found 0000:84:00.0 (0x8086 - 0x159b) 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:22:17.375 Found 0000:84:00.1 (0x8086 - 0x159b) 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:22:17.375 Found net devices under 0000:84:00.0: cvl_0_0 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:22:17.375 Found net devices under 0000:84:00.1: cvl_0_1 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:17.375 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:17.376 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:17.376 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:17.376 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:22:17.376 00:22:17.376 --- 10.0.0.2 ping statistics --- 00:22:17.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.376 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:22:17.376 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:17.376 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
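For reference, the namespace plumbing that nvmf_tcp_init performs in the trace above reduces to the sketch below. The interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addressing are the ones this run reports; on a different host the E810 port names would differ.

    # move the target-side port into its own namespace and address both ends
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side (default ns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port on the initiator interface, then verify both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1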
00:22:17.376 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:22:17.376 00:22:17.376 --- 10.0.0.1 ping statistics --- 00:22:17.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.376 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:22:17.376 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:17.376 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:22:17.376 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:17.376 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:17.376 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:17.376 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:17.376 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:17.376 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:17.376 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:17.376 14:01:11 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:17.376 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:17.376 14:01:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:17.376 14:01:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:17.376 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=3811971 00:22:17.376 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:17.376 14:01:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 3811971 00:22:17.376 14:01:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 3811971 ']' 00:22:17.376 14:01:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:17.376 14:01:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:17.376 14:01:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:17.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:17.376 14:01:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:17.376 14:01:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:17.376 [2024-07-15 14:01:12.004336] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:22:17.376 [2024-07-15 14:01:12.004419] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:17.376 EAL: No free 2048 kB hugepages reported on node 1 00:22:17.376 [2024-07-15 14:01:12.070359] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:17.376 [2024-07-15 14:01:12.180332] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:17.376 [2024-07-15 14:01:12.180385] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
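The target itself is started above inside that namespace as ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF (shared-memory id 0, all tracepoint groups enabled, reactors on cores 0-3), after which the harness blocks in waitforlisten until the RPC socket answers. A rough stand-alone equivalent, polling rpc_get_methods as a stand-in for waitforlisten, would be:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll the default RPC socket until the target is ready for rpc.py calls
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

The filesystem-path unix socket in /var/tmp is reachable regardless of network namespace, which is why the rpc.py calls later in the log are not wrapped in ip netns exec.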
00:22:17.376 [2024-07-15 14:01:12.180399] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:17.376 [2024-07-15 14:01:12.180409] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:17.376 [2024-07-15 14:01:12.180419] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:17.376 [2024-07-15 14:01:12.180508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:17.376 [2024-07-15 14:01:12.180570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:17.376 [2024-07-15 14:01:12.180634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:17.376 [2024-07-15 14:01:12.180637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:17.632 14:01:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:17.632 14:01:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:22:17.632 14:01:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:17.632 14:01:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:17.632 14:01:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:17.632 14:01:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:17.632 14:01:12 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:17.633 14:01:12 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:20.902 14:01:15 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:20.902 14:01:15 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:20.902 14:01:15 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:82:00.0 00:22:20.902 14:01:15 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:21.159 14:01:15 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:21.159 14:01:15 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:82:00.0 ']' 00:22:21.159 14:01:15 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:21.159 14:01:15 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:21.159 14:01:15 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:21.416 [2024-07-15 14:01:16.216345] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:21.416 14:01:16 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:21.673 14:01:16 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:21.673 14:01:16 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:21.931 14:01:16 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:21.931 14:01:16 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:22.188 14:01:16 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:22.445 [2024-07-15 14:01:17.214776] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:22.445 14:01:17 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:22.702 14:01:17 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:82:00.0 ']' 00:22:22.702 14:01:17 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:82:00.0' 00:22:22.702 14:01:17 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:22.702 14:01:17 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:82:00.0' 00:22:24.072 Initializing NVMe Controllers 00:22:24.072 Attached to NVMe Controller at 0000:82:00.0 [8086:0a54] 00:22:24.072 Associating PCIE (0000:82:00.0) NSID 1 with lcore 0 00:22:24.072 Initialization complete. Launching workers. 00:22:24.072 ======================================================== 00:22:24.072 Latency(us) 00:22:24.072 Device Information : IOPS MiB/s Average min max 00:22:24.072 PCIE (0000:82:00.0) NSID 1 from core 0: 85090.81 332.39 375.36 32.18 7362.63 00:22:24.072 ======================================================== 00:22:24.072 Total : 85090.81 332.39 375.36 32.18 7362.63 00:22:24.072 00:22:24.072 14:01:18 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:24.072 EAL: No free 2048 kB hugepages reported on node 1 00:22:25.449 Initializing NVMe Controllers 00:22:25.449 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:25.449 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:25.449 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:25.449 Initialization complete. Launching workers. 
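Before these perf runs, the subsystem they exercise was assembled over RPC. Stripped of the harness paths, the rpc.py sequence is roughly the following (Malloc0 is the ramdisk bdev created with bdev_malloc_create 64 512 above; Nvme0n1 is the local drive at 0000:82:00.0 attached via gen_nvme.sh / load_subsystem_config):

    rpc=./scripts/rpc.py
    $rpc bdev_malloc_create 64 512                 # -> Malloc0
    $rpc nvmf_create_transport -t tcp -o           # options as invoked above (NVMF_TRANSPORT_OPTS='-t tcp -o')
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420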
00:22:25.449 ======================================================== 00:22:25.449 Latency(us) 00:22:25.449 Device Information : IOPS MiB/s Average min max 00:22:25.449 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 141.64 0.55 7130.93 146.93 45676.24 00:22:25.449 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 60.84 0.24 17088.65 7352.00 47901.13 00:22:25.449 ======================================================== 00:22:25.449 Total : 202.48 0.79 10123.15 146.93 47901.13 00:22:25.449 00:22:25.449 14:01:19 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:25.449 EAL: No free 2048 kB hugepages reported on node 1 00:22:26.388 Initializing NVMe Controllers 00:22:26.388 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:26.388 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:26.388 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:26.388 Initialization complete. Launching workers. 00:22:26.388 ======================================================== 00:22:26.388 Latency(us) 00:22:26.388 Device Information : IOPS MiB/s Average min max 00:22:26.388 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8610.57 33.64 3716.98 443.37 11189.58 00:22:26.388 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3922.23 15.32 8171.15 6229.02 15774.81 00:22:26.388 ======================================================== 00:22:26.388 Total : 12532.80 48.96 5110.94 443.37 15774.81 00:22:26.388 00:22:26.646 14:01:21 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:22:26.646 14:01:21 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:22:26.646 14:01:21 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:26.646 EAL: No free 2048 kB hugepages reported on node 1 00:22:29.177 Initializing NVMe Controllers 00:22:29.177 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:29.177 Controller IO queue size 128, less than required. 00:22:29.177 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:29.177 Controller IO queue size 128, less than required. 00:22:29.177 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:29.177 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:29.177 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:29.177 Initialization complete. Launching workers. 
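All of the perf runs in this block are the same tool pointed at different controllers; only the -r transport string changes. A minimal reproduction outside the harness (paths shortened, run as root with hugepages configured) would be:

    perf=./build/bin/spdk_nvme_perf
    # local PCIe baseline against the drive at 0000:82:00.0
    $perf -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:82:00.0'
    # same 4K 50/50 random read/write workload over NVMe/TCP to the listener created earlier
    $perf -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

The fabrics runs execute in the default namespace (where cvl_0_1 holds 10.0.0.1), so unlike the target they need no ip netns exec wrapper; the later runs only vary the knobs on top of this base command (queue depth, IO size, -HI, -O, and --transport-stat, which adds the TCP transport counters printed further down).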
00:22:29.177 ======================================================== 00:22:29.177 Latency(us) 00:22:29.177 Device Information : IOPS MiB/s Average min max 00:22:29.177 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1397.10 349.28 93862.56 53854.49 135465.39 00:22:29.177 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 597.48 149.37 218844.55 74015.42 327720.74 00:22:29.177 ======================================================== 00:22:29.177 Total : 1994.58 498.64 131300.86 53854.49 327720.74 00:22:29.177 00:22:29.177 14:01:23 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:22:29.177 EAL: No free 2048 kB hugepages reported on node 1 00:22:29.177 No valid NVMe controllers or AIO or URING devices found 00:22:29.177 Initializing NVMe Controllers 00:22:29.177 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:29.177 Controller IO queue size 128, less than required. 00:22:29.177 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:29.177 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:29.177 Controller IO queue size 128, less than required. 00:22:29.177 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:29.177 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:22:29.177 WARNING: Some requested NVMe devices were skipped 00:22:29.436 14:01:24 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:22:29.436 EAL: No free 2048 kB hugepages reported on node 1 00:22:31.975 Initializing NVMe Controllers 00:22:31.975 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:31.975 Controller IO queue size 128, less than required. 00:22:31.975 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:31.975 Controller IO queue size 128, less than required. 00:22:31.975 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:31.975 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:31.975 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:31.975 Initialization complete. Launching workers. 
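The -o 36964 attempt above is expected to end with every namespace removed: both namespaces expose 512-byte sectors, and 36964 is not a multiple of 512, which is easy to confirm:

    $ echo $(( 36964 % 512 ))
    100

With both namespaces dropped there is nothing left to exercise, which is why that run prints 'No valid NVMe controllers or AIO or URING devices found' instead of a results table.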
00:22:31.975 00:22:31.975 ==================== 00:22:31.975 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:22:31.975 TCP transport: 00:22:31.975 polls: 10260 00:22:31.975 idle_polls: 6180 00:22:31.975 sock_completions: 4080 00:22:31.975 nvme_completions: 4965 00:22:31.975 submitted_requests: 7434 00:22:31.975 queued_requests: 1 00:22:31.975 00:22:31.975 ==================== 00:22:31.975 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:22:31.975 TCP transport: 00:22:31.975 polls: 12391 00:22:31.975 idle_polls: 8367 00:22:31.975 sock_completions: 4024 00:22:31.975 nvme_completions: 5341 00:22:31.975 submitted_requests: 8078 00:22:31.975 queued_requests: 1 00:22:31.975 ======================================================== 00:22:31.975 Latency(us) 00:22:31.975 Device Information : IOPS MiB/s Average min max 00:22:31.975 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1240.93 310.23 107648.64 66120.09 176631.03 00:22:31.975 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1334.93 333.73 96913.44 41926.06 149237.37 00:22:31.975 ======================================================== 00:22:31.975 Total : 2575.86 643.96 102085.17 41926.06 176631.03 00:22:31.975 00:22:31.975 14:01:26 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:22:31.975 14:01:26 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:32.232 14:01:27 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:22:32.232 14:01:27 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:22:32.232 14:01:27 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:22:32.232 14:01:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:32.232 14:01:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:22:32.232 14:01:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:32.232 14:01:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:22:32.232 14:01:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:32.233 14:01:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:32.233 rmmod nvme_tcp 00:22:32.233 rmmod nvme_fabrics 00:22:32.233 rmmod nvme_keyring 00:22:32.490 14:01:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:32.490 14:01:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:22:32.490 14:01:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:22:32.490 14:01:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 3811971 ']' 00:22:32.490 14:01:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 3811971 00:22:32.490 14:01:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 3811971 ']' 00:22:32.490 14:01:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 3811971 00:22:32.490 14:01:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:22:32.490 14:01:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:32.490 14:01:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3811971 00:22:32.490 14:01:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:32.490 14:01:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:32.490 14:01:27 nvmf_tcp.nvmf_perf 
-- common/autotest_common.sh@966 -- # echo 'killing process with pid 3811971' 00:22:32.490 killing process with pid 3811971 00:22:32.490 14:01:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 3811971 00:22:32.490 14:01:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 3811971 00:22:34.396 14:01:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:34.396 14:01:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:34.396 14:01:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:34.396 14:01:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:34.396 14:01:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:34.396 14:01:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.396 14:01:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:34.396 14:01:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:36.301 14:01:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:36.301 00:22:36.301 real 0m21.058s 00:22:36.301 user 1m4.695s 00:22:36.301 sys 0m5.655s 00:22:36.301 14:01:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:36.301 14:01:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:36.301 ************************************ 00:22:36.301 END TEST nvmf_perf 00:22:36.301 ************************************ 00:22:36.301 14:01:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:36.301 14:01:30 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:36.301 14:01:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:36.301 14:01:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:36.301 14:01:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:36.301 ************************************ 00:22:36.301 START TEST nvmf_fio_host 00:22:36.301 ************************************ 00:22:36.302 14:01:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:36.302 * Looking for test storage... 
00:22:36.302 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:36.302 14:01:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:36.302 14:01:30 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:36.302 14:01:30 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:36.302 14:01:30 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:36.302 14:01:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.302 14:01:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.302 14:01:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.302 14:01:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:36.302 14:01:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.302 14:01:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:36.302 14:01:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:22:36.302 14:01:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:36.302 14:01:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:36.302 14:01:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:22:36.302 14:01:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:36.302 14:01:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:36.302 14:01:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:36.302 14:01:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:36.302 14:01:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:36.302 14:01:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:36.302 14:01:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:36.302 14:01:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:36.302 14:01:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:22:36.302 14:01:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:36.302 14:01:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:36.302 14:01:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:36.302 14:01:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:36.302 14:01:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:36.302 14:01:30 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:36.302 14:01:30 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:36.302 14:01:30 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:36.302 14:01:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.302 14:01:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.302 14:01:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.302 14:01:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:36.302 14:01:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.302 14:01:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:22:36.302 14:01:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:36.302 14:01:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:36.302 14:01:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:36.302 14:01:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:36.302 14:01:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:36.302 14:01:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:36.302 14:01:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:36.302 14:01:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:36.302 14:01:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:36.302 14:01:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:22:36.302 14:01:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:36.302 14:01:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:36.302 14:01:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:36.302 14:01:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:36.302 14:01:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:36.302 14:01:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:36.302 14:01:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:36.302 14:01:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:36.302 14:01:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:36.302 14:01:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:36.302 14:01:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:22:36.302 14:01:30 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:22:38.207 Found 0000:84:00.0 (0x8086 - 0x159b) 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:22:38.207 Found 0000:84:00.1 (0x8086 - 0x159b) 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:22:38.207 Found net devices under 0000:84:00.0: cvl_0_0 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.207 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:22:38.208 Found net devices under 0000:84:00.1: cvl_0_1 00:22:38.208 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.208 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:38.208 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
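The device bucketing traced above (here for the fio_host test, identically for the perf test earlier) keys off PCI vendor:device IDs: 0x8086:0x159b and 0x8086:0x1592 are counted as E810, 0x8086:0x37d2 as X722, and the 0x15b3 entries as Mellanox, after which the matching netdev is read from sysfs. A rough stand-alone equivalent for the E810 case found on this host is:

    # list E810 ports (8086:159b) and the netdev bound to each PCI function
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        for netdev in /sys/bus/pci/devices/$pci/net/*; do
            echo "$pci -> ${netdev##*/}"
        done
    done

On this machine that corresponds to 0000:84:00.0 -> cvl_0_0 and 0000:84:00.1 -> cvl_0_1, matching the 'Found net devices under' lines above.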
00:22:38.208 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:38.208 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:38.208 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:38.208 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:38.208 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:38.208 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:38.208 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:38.208 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:38.208 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:38.208 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:38.208 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:38.208 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:38.208 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:38.208 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:38.208 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:38.208 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:38.208 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:38.208 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:38.208 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:38.208 14:01:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:38.208 14:01:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:38.208 14:01:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:38.208 14:01:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:38.465 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:38.465 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:22:38.465 00:22:38.465 --- 10.0.0.2 ping statistics --- 00:22:38.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.465 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:22:38.465 14:01:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:38.465 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:38.465 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:22:38.465 00:22:38.465 --- 10.0.0.1 ping statistics --- 00:22:38.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.465 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:22:38.465 14:01:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:38.465 14:01:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:22:38.465 14:01:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:38.466 14:01:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:38.466 14:01:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:38.466 14:01:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:38.466 14:01:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:38.466 14:01:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:38.466 14:01:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:38.466 14:01:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:22:38.466 14:01:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:22:38.466 14:01:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:38.466 14:01:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.466 14:01:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3815946 00:22:38.466 14:01:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:38.466 14:01:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:38.466 14:01:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3815946 00:22:38.466 14:01:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 3815946 ']' 00:22:38.466 14:01:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:38.466 14:01:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:38.466 14:01:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:38.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:38.466 14:01:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:38.466 14:01:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.466 [2024-07-15 14:01:33.130654] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:22:38.466 [2024-07-15 14:01:33.130751] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:38.466 EAL: No free 2048 kB hugepages reported on node 1 00:22:38.466 [2024-07-15 14:01:33.192891] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:38.466 [2024-07-15 14:01:33.297901] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
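
waitforlisten above simply blocks until the freshly launched nvmf_tgt answers on its RPC socket. A hedged stand-in for that helper (the retry loop and the spdk_get_version probe are illustrative; the real helper lives in autotest_common.sh), run from the SPDK repository root:

  RPC=./scripts/rpc.py

  sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # Poll the default RPC socket until the target is ready, roughly what waitforlisten does.
  for _ in $(seq 1 100); do
    sudo "$RPC" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1 && break
    sleep 0.1
  done
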
00:22:38.466 [2024-07-15 14:01:33.297950] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:38.466 [2024-07-15 14:01:33.297978] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:38.466 [2024-07-15 14:01:33.297989] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:38.466 [2024-07-15 14:01:33.297998] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:38.466 [2024-07-15 14:01:33.298081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:38.466 [2024-07-15 14:01:33.298159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:38.466 [2024-07-15 14:01:33.298221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:38.466 [2024-07-15 14:01:33.298218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:38.723 14:01:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:38.723 14:01:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:22:38.723 14:01:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:38.981 [2024-07-15 14:01:33.719402] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:38.981 14:01:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:22:38.981 14:01:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:38.981 14:01:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.981 14:01:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:22:39.239 Malloc1 00:22:39.239 14:01:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:39.498 14:01:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:39.755 14:01:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:40.013 [2024-07-15 14:01:34.777120] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:40.013 14:01:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:40.271 14:01:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:22:40.271 14:01:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:40.271 14:01:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:22:40.271 14:01:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:40.271 14:01:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:40.271 14:01:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:40.271 14:01:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:40.271 14:01:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:22:40.271 14:01:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:40.271 14:01:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:40.271 14:01:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:40.271 14:01:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:22:40.271 14:01:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:40.271 14:01:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:40.271 14:01:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:40.271 14:01:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:40.271 14:01:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:40.271 14:01:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:40.271 14:01:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:40.271 14:01:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:40.271 14:01:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:40.272 14:01:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:40.272 14:01:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:40.529 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:40.529 fio-3.35 00:22:40.529 Starting 1 thread 00:22:40.529 EAL: No free 2048 kB hugepages reported on node 1 00:22:43.071 00:22:43.071 test: (groupid=0, jobs=1): err= 0: pid=3816306: Mon Jul 15 14:01:37 2024 00:22:43.071 read: IOPS=9132, BW=35.7MiB/s (37.4MB/s)(71.6MiB/2006msec) 00:22:43.071 slat (usec): min=2, max=135, avg= 2.98, stdev= 1.93 00:22:43.071 clat (usec): min=2417, max=13693, avg=7661.45, stdev=613.40 00:22:43.071 lat (usec): min=2441, max=13696, avg=7664.43, stdev=613.32 00:22:43.071 clat percentiles (usec): 00:22:43.071 | 1.00th=[ 6325], 5.00th=[ 6718], 10.00th=[ 6915], 20.00th=[ 7177], 00:22:43.071 | 30.00th=[ 7373], 40.00th=[ 7504], 50.00th=[ 7635], 60.00th=[ 7832], 00:22:43.071 | 70.00th=[ 7963], 80.00th=[ 8094], 90.00th=[ 8356], 95.00th=[ 8586], 00:22:43.071 | 99.00th=[ 8979], 99.50th=[ 9241], 99.90th=[12125], 99.95th=[13173], 00:22:43.071 | 99.99th=[13566] 00:22:43.071 bw ( KiB/s): min=35480, 
max=37056, per=99.88%, avg=36488.00, stdev=692.11, samples=4 00:22:43.071 iops : min= 8870, max= 9264, avg=9122.00, stdev=173.03, samples=4 00:22:43.071 write: IOPS=9143, BW=35.7MiB/s (37.5MB/s)(71.6MiB/2006msec); 0 zone resets 00:22:43.071 slat (nsec): min=2312, max=96842, avg=3128.27, stdev=1483.36 00:22:43.071 clat (usec): min=1146, max=11856, avg=6256.85, stdev=507.46 00:22:43.071 lat (usec): min=1153, max=11859, avg=6259.97, stdev=507.42 00:22:43.071 clat percentiles (usec): 00:22:43.071 | 1.00th=[ 5145], 5.00th=[ 5473], 10.00th=[ 5669], 20.00th=[ 5866], 00:22:43.071 | 30.00th=[ 5997], 40.00th=[ 6128], 50.00th=[ 6259], 60.00th=[ 6390], 00:22:43.071 | 70.00th=[ 6521], 80.00th=[ 6652], 90.00th=[ 6849], 95.00th=[ 6980], 00:22:43.071 | 99.00th=[ 7373], 99.50th=[ 7504], 99.90th=[ 9634], 99.95th=[10290], 00:22:43.071 | 99.99th=[11076] 00:22:43.071 bw ( KiB/s): min=36240, max=36864, per=100.00%, avg=36580.00, stdev=300.86, samples=4 00:22:43.071 iops : min= 9060, max= 9216, avg=9145.00, stdev=75.22, samples=4 00:22:43.071 lat (msec) : 2=0.01%, 4=0.13%, 10=99.72%, 20=0.14% 00:22:43.071 cpu : usr=67.88%, sys=29.83%, ctx=59, majf=0, minf=40 00:22:43.071 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:22:43.071 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:43.071 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:43.071 issued rwts: total=18320,18341,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:43.071 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:43.071 00:22:43.071 Run status group 0 (all jobs): 00:22:43.071 READ: bw=35.7MiB/s (37.4MB/s), 35.7MiB/s-35.7MiB/s (37.4MB/s-37.4MB/s), io=71.6MiB (75.0MB), run=2006-2006msec 00:22:43.071 WRITE: bw=35.7MiB/s (37.5MB/s), 35.7MiB/s-35.7MiB/s (37.5MB/s-37.5MB/s), io=71.6MiB (75.1MB), run=2006-2006msec 00:22:43.071 14:01:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:43.071 14:01:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:43.071 14:01:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:43.071 14:01:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:43.071 14:01:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:43.072 14:01:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:43.072 14:01:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:22:43.072 14:01:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:43.072 14:01:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:43.072 14:01:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:43.072 14:01:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:22:43.072 14:01:37 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:43.072 14:01:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:43.072 14:01:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:43.072 14:01:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:43.072 14:01:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:43.072 14:01:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:43.072 14:01:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:43.072 14:01:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:43.072 14:01:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:43.072 14:01:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:43.072 14:01:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:43.072 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:43.072 fio-3.35 00:22:43.072 Starting 1 thread 00:22:43.330 EAL: No free 2048 kB hugepages reported on node 1 00:22:45.885 00:22:45.885 test: (groupid=0, jobs=1): err= 0: pid=3816649: Mon Jul 15 14:01:40 2024 00:22:45.885 read: IOPS=8114, BW=127MiB/s (133MB/s)(254MiB/2006msec) 00:22:45.885 slat (usec): min=2, max=152, avg= 4.44, stdev= 2.70 00:22:45.885 clat (usec): min=1777, max=17946, avg=9163.96, stdev=2320.37 00:22:45.885 lat (usec): min=1782, max=17950, avg=9168.40, stdev=2320.39 00:22:45.885 clat percentiles (usec): 00:22:45.885 | 1.00th=[ 5014], 5.00th=[ 5800], 10.00th=[ 6390], 20.00th=[ 7177], 00:22:45.885 | 30.00th=[ 7767], 40.00th=[ 8291], 50.00th=[ 8848], 60.00th=[ 9503], 00:22:45.885 | 70.00th=[10421], 80.00th=[11076], 90.00th=[12125], 95.00th=[13173], 00:22:45.885 | 99.00th=[16057], 99.50th=[16909], 99.90th=[17695], 99.95th=[17695], 00:22:45.885 | 99.99th=[17957] 00:22:45.885 bw ( KiB/s): min=58432, max=71744, per=50.87%, avg=66048.00, stdev=6234.94, samples=4 00:22:45.885 iops : min= 3652, max= 4484, avg=4128.00, stdev=389.68, samples=4 00:22:45.885 write: IOPS=4832, BW=75.5MiB/s (79.2MB/s)(136MiB/1796msec); 0 zone resets 00:22:45.885 slat (usec): min=30, max=206, avg=38.49, stdev= 6.91 00:22:45.885 clat (usec): min=5745, max=21344, avg=11681.78, stdev=1950.96 00:22:45.885 lat (usec): min=5782, max=21381, avg=11720.27, stdev=1950.78 00:22:45.885 clat percentiles (usec): 00:22:45.885 | 1.00th=[ 7832], 5.00th=[ 8979], 10.00th=[ 9503], 20.00th=[10028], 00:22:45.885 | 30.00th=[10552], 40.00th=[11076], 50.00th=[11469], 60.00th=[11863], 00:22:45.885 | 70.00th=[12518], 80.00th=[13173], 90.00th=[14484], 95.00th=[15401], 00:22:45.885 | 99.00th=[16712], 99.50th=[17171], 99.90th=[19530], 99.95th=[19792], 00:22:45.885 | 99.99th=[21365] 00:22:45.885 bw ( KiB/s): min=61632, max=74560, per=89.02%, avg=68832.00, stdev=6361.81, samples=4 00:22:45.885 iops : min= 3852, max= 4660, avg=4302.00, stdev=397.61, samples=4 00:22:45.885 lat (msec) : 2=0.01%, 4=0.06%, 10=49.65%, 20=50.28%, 50=0.01% 00:22:45.885 cpu : usr=82.09%, sys=16.16%, ctx=46, 
majf=0, minf=67 00:22:45.885 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:22:45.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:45.885 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:45.885 issued rwts: total=16278,8679,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:45.885 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:45.885 00:22:45.885 Run status group 0 (all jobs): 00:22:45.885 READ: bw=127MiB/s (133MB/s), 127MiB/s-127MiB/s (133MB/s-133MB/s), io=254MiB (267MB), run=2006-2006msec 00:22:45.885 WRITE: bw=75.5MiB/s (79.2MB/s), 75.5MiB/s-75.5MiB/s (79.2MB/s-79.2MB/s), io=136MiB (142MB), run=1796-1796msec 00:22:45.885 14:01:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:45.885 14:01:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:22:45.885 14:01:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:45.885 14:01:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:22:45.885 14:01:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:22:45.885 14:01:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:45.885 14:01:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:22:45.885 14:01:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:45.885 14:01:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:22:45.885 14:01:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:45.885 14:01:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:45.885 rmmod nvme_tcp 00:22:45.885 rmmod nvme_fabrics 00:22:45.885 rmmod nvme_keyring 00:22:45.885 14:01:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:45.885 14:01:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:22:45.885 14:01:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:22:45.885 14:01:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 3815946 ']' 00:22:45.885 14:01:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 3815946 00:22:45.885 14:01:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 3815946 ']' 00:22:45.885 14:01:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 3815946 00:22:45.885 14:01:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:22:45.885 14:01:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:45.885 14:01:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3815946 00:22:45.886 14:01:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:45.886 14:01:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:45.886 14:01:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3815946' 00:22:45.886 killing process with pid 3815946 00:22:45.886 14:01:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 3815946 00:22:45.886 14:01:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 3815946 00:22:46.152 14:01:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:46.152 14:01:40 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:46.152 14:01:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:46.152 14:01:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:46.152 14:01:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:46.152 14:01:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:46.152 14:01:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:46.152 14:01:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:48.685 14:01:42 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:48.685 00:22:48.685 real 0m12.110s 00:22:48.685 user 0m35.983s 00:22:48.685 sys 0m3.753s 00:22:48.685 14:01:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:48.685 14:01:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.685 ************************************ 00:22:48.685 END TEST nvmf_fio_host 00:22:48.685 ************************************ 00:22:48.685 14:01:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:48.685 14:01:42 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:48.685 14:01:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:48.685 14:01:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:48.685 14:01:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:48.685 ************************************ 00:22:48.685 START TEST nvmf_failover 00:22:48.685 ************************************ 00:22:48.685 14:01:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:48.685 * Looking for test storage... 
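
Stripped of the xtrace noise, the nvmf_fio_host run that just finished is a short RPC sequence plus one fio invocation through the SPDK NVMe fio plugin (user-space NVMe/TCP, no kernel nvme-tcp involvement on the data path). A condensed sketch of that flow, run from the SPDK repository root with the NQN, address, and job file reported above:

  RPC=./scripts/rpc.py

  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc1                      # 64 MiB ramdisk namespace
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Drive I/O from the initiator side through the SPDK fio plugin.
  LD_PRELOAD=./build/fio/spdk_nvme /usr/src/fio/fio app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
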
00:22:48.685 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:48.685 14:01:43 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:48.685 14:01:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:22:48.686 14:01:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:48.686 14:01:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:48.686 14:01:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:48.686 14:01:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:48.686 14:01:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:48.686 14:01:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:48.686 14:01:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:48.686 14:01:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:48.686 14:01:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:48.686 14:01:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:48.686 14:01:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:48.686 14:01:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:22:48.686 14:01:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:48.686 14:01:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:48.686 14:01:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:48.686 14:01:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:48.686 14:01:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:48.686 14:01:43 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:48.686 14:01:43 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:48.686 14:01:43 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:48.686 14:01:43 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.686 14:01:43 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.686 14:01:43 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.686 14:01:43 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:22:48.686 14:01:43 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.686 14:01:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:22:48.686 14:01:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:48.686 14:01:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:48.686 14:01:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:48.686 14:01:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:48.686 14:01:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:48.686 14:01:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:48.686 14:01:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:48.686 14:01:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:48.686 14:01:43 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:48.686 14:01:43 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:48.686 14:01:43 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:48.686 14:01:43 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:48.686 14:01:43 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:22:48.686 14:01:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:48.686 14:01:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:48.686 14:01:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:48.686 14:01:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:22:48.686 14:01:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:48.686 14:01:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:48.686 14:01:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:48.686 14:01:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:48.686 14:01:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:48.686 14:01:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:48.686 14:01:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:22:48.686 14:01:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:22:50.607 Found 0000:84:00.0 (0x8086 - 0x159b) 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:22:50.607 Found 0000:84:00.1 (0x8086 - 0x159b) 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:22:50.607 Found net devices under 0000:84:00.0: cvl_0_0 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:22:50.607 Found net devices under 0000:84:00.1: cvl_0_1 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:50.607 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:50.607 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:22:50.607 00:22:50.607 --- 10.0.0.2 ping statistics --- 00:22:50.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.607 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:50.607 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:50.607 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:22:50.607 00:22:50.607 --- 10.0.0.1 ping statistics --- 00:22:50.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.607 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:50.607 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:50.608 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:50.608 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:50.608 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:50.608 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:50.608 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:50.608 14:01:45 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:50.608 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:50.608 14:01:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:50.608 14:01:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:50.608 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=3818966 00:22:50.608 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:50.608 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 3818966 00:22:50.608 14:01:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3818966 ']' 00:22:50.608 14:01:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:50.608 14:01:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:50.608 14:01:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:50.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:50.608 14:01:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:50.608 14:01:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:50.608 [2024-07-15 14:01:45.325232] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
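
The modprobe nvme-tcp above loads the kernel initiator module; the failover test itself drives I/O through bdevperf's user-space bdev_nvme path, but once the subsystem listeners created a few steps below are up, the target can also be sanity-checked from the initiator side with nvme-cli. A hedged example (assumes nvme-cli is installed; not part of failover.sh):

  sudo nvme discover -t tcp -a 10.0.0.2 -s 4420
  sudo nvme connect  -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  sudo nvme disconnect -n nqn.2016-06.io.spdk:cnode1
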
00:22:50.608 [2024-07-15 14:01:45.325306] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:50.608 EAL: No free 2048 kB hugepages reported on node 1 00:22:50.608 [2024-07-15 14:01:45.388329] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:50.865 [2024-07-15 14:01:45.492341] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:50.865 [2024-07-15 14:01:45.492399] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:50.865 [2024-07-15 14:01:45.492426] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:50.865 [2024-07-15 14:01:45.492436] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:50.865 [2024-07-15 14:01:45.492445] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:50.865 [2024-07-15 14:01:45.492602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:50.865 [2024-07-15 14:01:45.492665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:50.865 [2024-07-15 14:01:45.492668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:50.865 14:01:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:50.865 14:01:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:22:50.865 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:50.865 14:01:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:50.865 14:01:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:50.865 14:01:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:50.865 14:01:45 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:51.124 [2024-07-15 14:01:45.857584] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:51.124 14:01:45 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:51.382 Malloc0 00:22:51.382 14:01:46 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:51.640 14:01:46 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:52.205 14:01:46 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:52.205 [2024-07-15 14:01:46.987919] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:52.205 14:01:47 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:52.463 [2024-07-15 
14:01:47.248699] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:52.463 14:01:47 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:52.722 [2024-07-15 14:01:47.545610] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:52.981 14:01:47 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3819258 00:22:52.981 14:01:47 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:52.981 14:01:47 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3819258 /var/tmp/bdevperf.sock 00:22:52.981 14:01:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3819258 ']' 00:22:52.981 14:01:47 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:52.981 14:01:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:52.981 14:01:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:52.981 14:01:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:52.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:52.981 14:01:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:52.981 14:01:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:53.239 14:01:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:53.239 14:01:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:22:53.239 14:01:47 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:53.496 NVMe0n1 00:22:53.496 14:01:48 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:53.754 00:22:53.754 14:01:48 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3819395 00:22:53.754 14:01:48 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:53.754 14:01:48 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:22:55.129 14:01:49 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:55.129 [2024-07-15 14:01:49.822920] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a550 is same with the state(5) to be set 00:22:55.129 14:01:49 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:22:58.409 14:01:52 
nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:58.409 00:22:58.409 14:01:53 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:58.975 14:01:53 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:23:02.275 14:01:56 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:02.275 [2024-07-15 14:01:56.860500] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:02.275 14:01:56 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:23:03.209 14:01:57 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:03.468 [2024-07-15 14:01:58.145117] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2654ef0 is same with the state(5) to be set 00:23:03.468 [2024-07-15 14:01:58.145194] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2654ef0 is same with the state(5) to be set 00:23:03.468 [2024-07-15 14:01:58.145224] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2654ef0 is same with the state(5) to be set 00:23:03.468 [2024-07-15 14:01:58.145236] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2654ef0 is same with the state(5) to be set 00:23:03.468 [2024-07-15 14:01:58.145248] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2654ef0 is same with the state(5) to be set 00:23:03.468 [2024-07-15 14:01:58.145259] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2654ef0 is same with the state(5) to be set 00:23:03.468 [2024-07-15 14:01:58.145271] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2654ef0 is same with the state(5) to be set 00:23:03.468 [2024-07-15 14:01:58.145283] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2654ef0 is same with the state(5) to be set 00:23:03.468 [2024-07-15 14:01:58.145294] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2654ef0 is same with the state(5) to be set 00:23:03.468 [2024-07-15 14:01:58.145305] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2654ef0 is same with the state(5) to be set 00:23:03.468 [2024-07-15 14:01:58.145317] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2654ef0 is same with the state(5) to be set 00:23:03.468 [2024-07-15 14:01:58.145328] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2654ef0 is same with the state(5) to be set 00:23:03.468 [2024-07-15 14:01:58.145340] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2654ef0 is same with the state(5) to be set 00:23:03.468 [2024-07-15 14:01:58.145351] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2654ef0 is same with the state(5) to be set 00:23:03.468 [2024-07-15 
14:01:58.145362] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2654ef0 is same with the state(5) to be set
[... last message repeated verbatim for tqpair=0x2654ef0 at tcp.c:1607 through 2024-07-15 14:01:58.145929 ...]
00:23:03.498 14:01:58 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 3819395
00:23:10.070 0
00:23:10.070 14:02:03 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 3819258
00:23:10.070 14:02:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3819258 ']'
00:23:10.070 14:02:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3819258
00:23:10.070 14:02:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
00:23:10.070 14:02:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:23:10.070 14:02:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3819258
00:23:10.070 14:02:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:23:10.070 14:02:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:23:10.070 14:02:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3819258'
killing process with pid 3819258
00:23:10.070 14:02:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3819258
00:23:10.070 14:02:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3819258
00:23:10.070 14:02:03 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:23:10.070 [2024-07-15 14:01:47.610498] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization...
00:23:10.070 [2024-07-15 14:01:47.610589] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3819258 ]
00:23:10.070 EAL: No free 2048 kB hugepages reported on node 1
00:23:10.070 [2024-07-15 14:01:47.671182] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:10.070 [2024-07-15 14:01:47.780264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:23:10.070 Running I/O for 15 seconds...
00:23:10.070 [2024-07-15 14:01:49.823453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:84448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.070 [2024-07-15 14:01:49.823496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.070 [2024-07-15 14:01:49.823526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:84456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.070 [2024-07-15 14:01:49.823542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.070 [2024-07-15 14:01:49.823560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:84464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.070 [2024-07-15 14:01:49.823574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.070 [2024-07-15 14:01:49.823590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:84472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.070 [2024-07-15 14:01:49.823604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.070 [2024-07-15 14:01:49.823620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:84480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.070 [2024-07-15 14:01:49.823634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.070 [2024-07-15 14:01:49.823650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.070 [2024-07-15 14:01:49.823664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.070 [2024-07-15 14:01:49.823680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:84496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.070 [2024-07-15 14:01:49.823694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.070 [2024-07-15 14:01:49.823710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:84504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.070 [2024-07-15 14:01:49.823747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.070 [2024-07-15 14:01:49.823766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:84512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.070 [2024-07-15 14:01:49.823782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.070 [2024-07-15 14:01:49.823798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:84520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.071 [2024-07-15 14:01:49.823813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.071 [2024-07-15 14:01:49.823830] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:84528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.071 [2024-07-15 14:01:49.823844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.071 [2024-07-15 14:01:49.823871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:84536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.071 [2024-07-15 14:01:49.823888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.071 [2024-07-15 14:01:49.823904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:84544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.071 [2024-07-15 14:01:49.823919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.071 [2024-07-15 14:01:49.823935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.071 [2024-07-15 14:01:49.823949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.071 [2024-07-15 14:01:49.823965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:84560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.071 [2024-07-15 14:01:49.823979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.071 [2024-07-15 14:01:49.823995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:84568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.071 [2024-07-15 14:01:49.824010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.071 [2024-07-15 14:01:49.824041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:84576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.071 [2024-07-15 14:01:49.824056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.071 [2024-07-15 14:01:49.824072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:84584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.071 [2024-07-15 14:01:49.824086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.071 [2024-07-15 14:01:49.824101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:84592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.071 [2024-07-15 14:01:49.824114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.071 [2024-07-15 14:01:49.824130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:84600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.071 [2024-07-15 14:01:49.824144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.071 [2024-07-15 14:01:49.824159] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:84608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.071 [2024-07-15 14:01:49.824173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.071 [2024-07-15 14:01:49.824189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:84616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.071 [2024-07-15 14:01:49.824204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.071 [2024-07-15 14:01:49.824220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:84624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.071 [2024-07-15 14:01:49.824234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.071 [2024-07-15 14:01:49.824250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:83816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.071 [2024-07-15 14:01:49.824267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.071 [2024-07-15 14:01:49.824283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:83824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.071 [2024-07-15 14:01:49.824298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.071 [2024-07-15 14:01:49.824313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:83832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.071 [2024-07-15 14:01:49.824326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.071 [2024-07-15 14:01:49.824341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.071 [2024-07-15 14:01:49.824354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.071 [2024-07-15 14:01:49.824369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:83848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.071 [2024-07-15 14:01:49.824383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.071 [2024-07-15 14:01:49.824398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:83856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.071 [2024-07-15 14:01:49.824411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.071 [2024-07-15 14:01:49.824426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:83864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.071 [2024-07-15 14:01:49.824440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.071 [2024-07-15 14:01:49.824454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 
lba:83872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.071 [2024-07-15 14:01:49.824468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.071 [2024-07-15 14:01:49.824483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:83880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.071 [2024-07-15 14:01:49.824497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.071 [2024-07-15 14:01:49.824512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:83888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.071 [2024-07-15 14:01:49.824526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.071 [2024-07-15 14:01:49.824541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:83896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.071 [2024-07-15 14:01:49.824554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.071 [2024-07-15 14:01:49.824569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:83904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.071 [2024-07-15 14:01:49.824582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.071 [2024-07-15 14:01:49.824597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:83912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.071 [2024-07-15 14:01:49.824611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.071 [2024-07-15 14:01:49.824631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:83920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.071 [2024-07-15 14:01:49.824645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.071 [2024-07-15 14:01:49.824660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:83928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.071 [2024-07-15 14:01:49.824674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.071 [2024-07-15 14:01:49.824689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.071 [2024-07-15 14:01:49.824704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.071 [2024-07-15 14:01:49.824734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:84632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.071 [2024-07-15 14:01:49.824756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.071 [2024-07-15 14:01:49.824773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:84640 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:23:10.071 [2024-07-15 14:01:49.824788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.071 [2024-07-15 14:01:49.824803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.071 [2024-07-15 14:01:49.824818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.071 [2024-07-15 14:01:49.824834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.071 [2024-07-15 14:01:49.824848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.071 [2024-07-15 14:01:49.824863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:84664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.071 [2024-07-15 14:01:49.824877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.071 [2024-07-15 14:01:49.824893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:84672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.071 [2024-07-15 14:01:49.824907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.071 [2024-07-15 14:01:49.824922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:84680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.071 [2024-07-15 14:01:49.824936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.071 [2024-07-15 14:01:49.824954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:84688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.071 [2024-07-15 14:01:49.824979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.071 [2024-07-15 14:01:49.824999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:84696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.072 [2024-07-15 14:01:49.825014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.072 [2024-07-15 14:01:49.825030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:84704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.072 [2024-07-15 14:01:49.825049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.072 [2024-07-15 14:01:49.825066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:84712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.072 [2024-07-15 14:01:49.825081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.072 [2024-07-15 14:01:49.825096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:84720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.072 [2024-07-15 14:01:49.825110] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.072 [2024-07-15 14:01:49.825125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:84728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.072 [2024-07-15 14:01:49.825139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.072 [2024-07-15 14:01:49.825155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:84736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.072 [2024-07-15 14:01:49.825169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.072 [2024-07-15 14:01:49.825184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:84744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.072 [2024-07-15 14:01:49.825199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.072 [2024-07-15 14:01:49.825215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:84752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.072 [2024-07-15 14:01:49.825229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.072 [2024-07-15 14:01:49.825244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:84760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.072 [2024-07-15 14:01:49.825258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.072 [2024-07-15 14:01:49.825274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.072 [2024-07-15 14:01:49.825288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.072 [2024-07-15 14:01:49.825303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:84776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.072 [2024-07-15 14:01:49.825317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.072 [2024-07-15 14:01:49.825332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.072 [2024-07-15 14:01:49.825346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.072 [2024-07-15 14:01:49.825362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:84792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.072 [2024-07-15 14:01:49.825376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.072 [2024-07-15 14:01:49.825391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:84800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.072 [2024-07-15 14:01:49.825405] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.072 [2024-07-15 14:01:49.825421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:84808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.072 [2024-07-15 14:01:49.825439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.072 [2024-07-15 14:01:49.825456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:84816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.072 [2024-07-15 14:01:49.825470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.072 [2024-07-15 14:01:49.825486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:83944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.072 [2024-07-15 14:01:49.825500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.072 [2024-07-15 14:01:49.825516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:83952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.072 [2024-07-15 14:01:49.825530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.072 [2024-07-15 14:01:49.825546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:83960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.072 [2024-07-15 14:01:49.825559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.072 [2024-07-15 14:01:49.825575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:83968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.072 [2024-07-15 14:01:49.825589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.072 [2024-07-15 14:01:49.825605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:83976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.072 [2024-07-15 14:01:49.825618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.072 [2024-07-15 14:01:49.825634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:83984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.072 [2024-07-15 14:01:49.825648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.072 [2024-07-15 14:01:49.825664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:83992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.072 [2024-07-15 14:01:49.825678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.072 [2024-07-15 14:01:49.825694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:84000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.072 [2024-07-15 14:01:49.825708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.072 [2024-07-15 14:01:49.825724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:84008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.072 [2024-07-15 14:01:49.825746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.072 [2024-07-15 14:01:49.825765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:84016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.072 [2024-07-15 14:01:49.825779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.072 [2024-07-15 14:01:49.825795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:84024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.072 [2024-07-15 14:01:49.825809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.072 [2024-07-15 14:01:49.825829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:84032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.072 [2024-07-15 14:01:49.825843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.072 [2024-07-15 14:01:49.825859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:84040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.072 [2024-07-15 14:01:49.825873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.072 [2024-07-15 14:01:49.825889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:84048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.072 [2024-07-15 14:01:49.825903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.072 [2024-07-15 14:01:49.825919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:84056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.072 [2024-07-15 14:01:49.825933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.072 [2024-07-15 14:01:49.825949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:84064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.072 [2024-07-15 14:01:49.825963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.072 [2024-07-15 14:01:49.825978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:84072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.072 [2024-07-15 14:01:49.825991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.072 [2024-07-15 14:01:49.826007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:84080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.072 [2024-07-15 14:01:49.826021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:10.072 [2024-07-15 14:01:49.826037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:84088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.072 [2024-07-15 14:01:49.826050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.072 [2024-07-15 14:01:49.826066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:84096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.072 [2024-07-15 14:01:49.826079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.072 [2024-07-15 14:01:49.826095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.072 [2024-07-15 14:01:49.826109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.072 [2024-07-15 14:01:49.826124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.072 [2024-07-15 14:01:49.826138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.073 [2024-07-15 14:01:49.826154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:84120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.073 [2024-07-15 14:01:49.826174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.073 [2024-07-15 14:01:49.826190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:84128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.073 [2024-07-15 14:01:49.826208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.073 [2024-07-15 14:01:49.826225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:84136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.073 [2024-07-15 14:01:49.826239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.073 [2024-07-15 14:01:49.826255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:84144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.073 [2024-07-15 14:01:49.826269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.073 [2024-07-15 14:01:49.826284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:84152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.073 [2024-07-15 14:01:49.826298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.073 [2024-07-15 14:01:49.826313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:84160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.073 [2024-07-15 14:01:49.826327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.073 [2024-07-15 14:01:49.826343] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:84168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.073 [2024-07-15 14:01:49.826357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.073 [2024-07-15 14:01:49.826372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:84176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.073 [2024-07-15 14:01:49.826387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.073 [2024-07-15 14:01:49.826402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.073 [2024-07-15 14:01:49.826416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.073 [2024-07-15 14:01:49.826431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:84192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.073 [2024-07-15 14:01:49.826445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.073 [2024-07-15 14:01:49.826460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:84200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.073 [2024-07-15 14:01:49.826474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.073 [2024-07-15 14:01:49.826489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:84208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.073 [2024-07-15 14:01:49.826503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.073 [2024-07-15 14:01:49.826518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:84216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.073 [2024-07-15 14:01:49.826532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.073 [2024-07-15 14:01:49.826547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:84224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.073 [2024-07-15 14:01:49.826562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.073 [2024-07-15 14:01:49.826581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:84232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.073 [2024-07-15 14:01:49.826596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.073 [2024-07-15 14:01:49.826611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:84240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.073 [2024-07-15 14:01:49.826625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.073 [2024-07-15 14:01:49.826641] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:84248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.073 [2024-07-15 14:01:49.826661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.073 [2024-07-15 14:01:49.826677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:84256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.073 [2024-07-15 14:01:49.826691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.073 [2024-07-15 14:01:49.826706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:84264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.073 [2024-07-15 14:01:49.826720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.073 [2024-07-15 14:01:49.826735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:84272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.073 [2024-07-15 14:01:49.826758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.073 [2024-07-15 14:01:49.826774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:84280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.073 [2024-07-15 14:01:49.826788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.073 [2024-07-15 14:01:49.826804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:84288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.073 [2024-07-15 14:01:49.826819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.073 [2024-07-15 14:01:49.826835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:84296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.073 [2024-07-15 14:01:49.826848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.073 [2024-07-15 14:01:49.826863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:84304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.073 [2024-07-15 14:01:49.826877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.073 [2024-07-15 14:01:49.826893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:84312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.073 [2024-07-15 14:01:49.826907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.073 [2024-07-15 14:01:49.826922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:84320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.073 [2024-07-15 14:01:49.826936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.073 [2024-07-15 14:01:49.826951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:26 nsid:1 lba:84328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.073 [2024-07-15 14:01:49.826969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.073 [2024-07-15 14:01:49.826985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:84336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.073 [2024-07-15 14:01:49.826999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.073 [2024-07-15 14:01:49.827015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.073 [2024-07-15 14:01:49.827028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.073 [2024-07-15 14:01:49.827044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:84352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.073 [2024-07-15 14:01:49.827058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.073 [2024-07-15 14:01:49.827073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.073 [2024-07-15 14:01:49.827087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.073 [2024-07-15 14:01:49.827102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:84368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.073 [2024-07-15 14:01:49.827115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.073 [2024-07-15 14:01:49.827131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:84376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.073 [2024-07-15 14:01:49.827150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.073 [2024-07-15 14:01:49.827166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:84384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.073 [2024-07-15 14:01:49.827180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.073 [2024-07-15 14:01:49.827195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:84392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.073 [2024-07-15 14:01:49.827209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.073 [2024-07-15 14:01:49.827224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:84400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.073 [2024-07-15 14:01:49.827238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.073 [2024-07-15 14:01:49.827253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:84408 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.073 [2024-07-15 14:01:49.827266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.073 [2024-07-15 14:01:49.827281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:84416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.073 [2024-07-15 14:01:49.827295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.073 [2024-07-15 14:01:49.827310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:84424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.073 [2024-07-15 14:01:49.827324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.073 [2024-07-15 14:01:49.827339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:84432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.073 [2024-07-15 14:01:49.827356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.073 [2024-07-15 14:01:49.827372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:84440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.073 [2024-07-15 14:01:49.827386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.073 [2024-07-15 14:01:49.827401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:84824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.073 [2024-07-15 14:01:49.827415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.073 [2024-07-15 14:01:49.827429] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b4c40 is same with the state(5) to be set 00:23:10.074 [2024-07-15 14:01:49.827446] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.074 [2024-07-15 14:01:49.827458] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.074 [2024-07-15 14:01:49.827469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84832 len:8 PRP1 0x0 PRP2 0x0 00:23:10.074 [2024-07-15 14:01:49.827482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.074 [2024-07-15 14:01:49.827545] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6b4c40 was disconnected and freed. reset controller. 
00:23:10.074 [2024-07-15 14:01:49.827564] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:23:10.074 [2024-07-15 14:01:49.827600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:10.074 [2024-07-15 14:01:49.827618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:10.074 [2024-07-15 14:01:49.827633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:10.074 [2024-07-15 14:01:49.827647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:10.074 [2024-07-15 14:01:49.827660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:10.074 [2024-07-15 14:01:49.827674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:10.074 [2024-07-15 14:01:49.827694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:10.074 [2024-07-15 14:01:49.827707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:10.074 [2024-07-15 14:01:49.827720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:10.074 [2024-07-15 14:01:49.827789] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68e790 (9): Bad file descriptor
00:23:10.074 [2024-07-15 14:01:49.831039] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:10.074 [2024-07-15 14:01:49.989191] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:23:10.074 [2024-07-15 14:01:53.547580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:127464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.074 [2024-07-15 14:01:53.547662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.074 [2024-07-15 14:01:53.547707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:127472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.074 [2024-07-15 14:01:53.547760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.074 [2024-07-15 14:01:53.547779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:127480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.074 [2024-07-15 14:01:53.547795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.074 [2024-07-15 14:01:53.547810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:127488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.074 [2024-07-15 14:01:53.547826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.074 [2024-07-15 14:01:53.547841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:127496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.074 [2024-07-15 14:01:53.547856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.074 [2024-07-15 14:01:53.547872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:127504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.074 [2024-07-15 14:01:53.547886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.074 [2024-07-15 14:01:53.547902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:127512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.074 [2024-07-15 14:01:53.547916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.074 [2024-07-15 14:01:53.547931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:127520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.074 [2024-07-15 14:01:53.547945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.074 [2024-07-15 14:01:53.547961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:127528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.074 [2024-07-15 14:01:53.547975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.074 [2024-07-15 14:01:53.547990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:127536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.074 [2024-07-15 14:01:53.548004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.074 [2024-07-15 
14:01:53.548019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:127544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.074 [2024-07-15 14:01:53.548034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.074 [2024-07-15 14:01:53.548063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:127552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.074 [2024-07-15 14:01:53.548078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.074 [2024-07-15 14:01:53.548093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:127560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.074 [2024-07-15 14:01:53.548107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.074 [2024-07-15 14:01:53.548122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:127568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.074 [2024-07-15 14:01:53.548136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.074 [2024-07-15 14:01:53.548155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:127576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.074 [2024-07-15 14:01:53.548169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.074 [2024-07-15 14:01:53.548184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:127584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.074 [2024-07-15 14:01:53.548198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.074 [2024-07-15 14:01:53.548213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:127592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.074 [2024-07-15 14:01:53.548227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.074 [2024-07-15 14:01:53.548243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:127600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.074 [2024-07-15 14:01:53.548257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.074 [2024-07-15 14:01:53.548273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:127608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.074 [2024-07-15 14:01:53.548287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.074 [2024-07-15 14:01:53.548302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:127616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.074 [2024-07-15 14:01:53.548316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.074 [2024-07-15 14:01:53.548331] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:127624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.074 [2024-07-15 14:01:53.548344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.074 [2024-07-15 14:01:53.548359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:127632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.074 [2024-07-15 14:01:53.548372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.074 [2024-07-15 14:01:53.548388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:127640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.074 [2024-07-15 14:01:53.548401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.074 [2024-07-15 14:01:53.548417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:127648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.074 [2024-07-15 14:01:53.548431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.075 [2024-07-15 14:01:53.548446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:127656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.075 [2024-07-15 14:01:53.548460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.075 [2024-07-15 14:01:53.548475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:127664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.075 [2024-07-15 14:01:53.548488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.075 [2024-07-15 14:01:53.548503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:127672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.075 [2024-07-15 14:01:53.548521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.075 [2024-07-15 14:01:53.548537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:127680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.075 [2024-07-15 14:01:53.548551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.075 [2024-07-15 14:01:53.548566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:127688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.075 [2024-07-15 14:01:53.548580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.075 [2024-07-15 14:01:53.548596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:127696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.075 [2024-07-15 14:01:53.548610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.075 [2024-07-15 14:01:53.548625] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:127704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.075 [2024-07-15 14:01:53.548639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.075 [2024-07-15 14:01:53.548653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:127712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.075 [2024-07-15 14:01:53.548666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.075 [2024-07-15 14:01:53.548681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:127720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.075 [2024-07-15 14:01:53.548694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.075 [2024-07-15 14:01:53.548709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:127728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.075 [2024-07-15 14:01:53.548744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.075 [2024-07-15 14:01:53.548762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:127736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.075 [2024-07-15 14:01:53.548777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.075 [2024-07-15 14:01:53.548793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:127744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.075 [2024-07-15 14:01:53.548807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.075 [2024-07-15 14:01:53.548822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:127752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.075 [2024-07-15 14:01:53.548836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.075 [2024-07-15 14:01:53.548852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:127760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.075 [2024-07-15 14:01:53.548865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.075 [2024-07-15 14:01:53.548881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.075 [2024-07-15 14:01:53.548894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.075 [2024-07-15 14:01:53.548913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:127776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.075 [2024-07-15 14:01:53.548928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.075 [2024-07-15 14:01:53.548944] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:115 nsid:1 lba:127784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.075 [2024-07-15 14:01:53.548957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.075 [2024-07-15 14:01:53.548973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:127792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.075 [2024-07-15 14:01:53.548987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.075 [2024-07-15 14:01:53.549002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:127800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.075 [2024-07-15 14:01:53.549016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.075 [2024-07-15 14:01:53.549031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:127808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.075 [2024-07-15 14:01:53.549060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.075 [2024-07-15 14:01:53.549075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:127816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.075 [2024-07-15 14:01:53.549089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.075 [2024-07-15 14:01:53.549104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:127824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.075 [2024-07-15 14:01:53.549117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.075 [2024-07-15 14:01:53.549132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:127832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.075 [2024-07-15 14:01:53.549145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.075 [2024-07-15 14:01:53.549160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:127840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.075 [2024-07-15 14:01:53.549174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.075 [2024-07-15 14:01:53.549189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:127848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.075 [2024-07-15 14:01:53.549202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.075 [2024-07-15 14:01:53.549218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:127856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.075 [2024-07-15 14:01:53.549231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.075 [2024-07-15 14:01:53.549247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 
nsid:1 lba:127864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.075 [2024-07-15 14:01:53.549260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.075 [2024-07-15 14:01:53.549275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:127872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.075 [2024-07-15 14:01:53.549292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.075 [2024-07-15 14:01:53.549308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:127880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.075 [2024-07-15 14:01:53.549321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.075 [2024-07-15 14:01:53.549336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:127888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.075 [2024-07-15 14:01:53.549350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.075 [2024-07-15 14:01:53.549364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:127896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.075 [2024-07-15 14:01:53.549378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.075 [2024-07-15 14:01:53.549392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:127904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.075 [2024-07-15 14:01:53.549406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.075 [2024-07-15 14:01:53.549421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:127912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.075 [2024-07-15 14:01:53.549434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.075 [2024-07-15 14:01:53.549448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:127920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.075 [2024-07-15 14:01:53.549462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.076 [2024-07-15 14:01:53.549477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:127928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.076 [2024-07-15 14:01:53.549491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.076 [2024-07-15 14:01:53.549506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:127936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.076 [2024-07-15 14:01:53.549519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.076 [2024-07-15 14:01:53.549534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:127944 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.076 [2024-07-15 14:01:53.549547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.076 [2024-07-15 14:01:53.549562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:127952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.076 [2024-07-15 14:01:53.549575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.076 [2024-07-15 14:01:53.549590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:127960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.076 [2024-07-15 14:01:53.549604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.076 [2024-07-15 14:01:53.549618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:127968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.076 [2024-07-15 14:01:53.549632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.076 [2024-07-15 14:01:53.549650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:127976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.076 [2024-07-15 14:01:53.549664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.076 [2024-07-15 14:01:53.549679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:127984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.076 [2024-07-15 14:01:53.549693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.076 [2024-07-15 14:01:53.549709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:128008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.076 [2024-07-15 14:01:53.549746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.076 [2024-07-15 14:01:53.549780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:128016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.076 [2024-07-15 14:01:53.549797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.076 [2024-07-15 14:01:53.549813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:128024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.076 [2024-07-15 14:01:53.549827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.076 [2024-07-15 14:01:53.549842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:128032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.076 [2024-07-15 14:01:53.549856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.076 [2024-07-15 14:01:53.549871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:128040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:10.076 [2024-07-15 14:01:53.549886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.076 [2024-07-15 14:01:53.549902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:128048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.076 [2024-07-15 14:01:53.549916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.076 [2024-07-15 14:01:53.549931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:128056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.076 [2024-07-15 14:01:53.549945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.076 [2024-07-15 14:01:53.549960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:128064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.076 [2024-07-15 14:01:53.549974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.076 [2024-07-15 14:01:53.549989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:128072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.076 [2024-07-15 14:01:53.550003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.076 [2024-07-15 14:01:53.550019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:128080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.076 [2024-07-15 14:01:53.550033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.076 [2024-07-15 14:01:53.550064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:128088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.076 [2024-07-15 14:01:53.550078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.076 [2024-07-15 14:01:53.550097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:128096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.076 [2024-07-15 14:01:53.550111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.076 [2024-07-15 14:01:53.550126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:128104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.076 [2024-07-15 14:01:53.550140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.076 [2024-07-15 14:01:53.550154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:128112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.076 [2024-07-15 14:01:53.550168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.076 [2024-07-15 14:01:53.550183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:128120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.076 [2024-07-15 
14:01:53.550196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.076 [2024-07-15 14:01:53.550211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:128128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.076 [2024-07-15 14:01:53.550225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.076 [2024-07-15 14:01:53.550240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:128136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.076 [2024-07-15 14:01:53.550254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.076 [2024-07-15 14:01:53.550268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:128144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.076 [2024-07-15 14:01:53.550282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.076 [2024-07-15 14:01:53.550313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:128152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.076 [2024-07-15 14:01:53.550328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.076 [2024-07-15 14:01:53.550343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:128160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.076 [2024-07-15 14:01:53.550357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.076 [2024-07-15 14:01:53.550372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:128168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.076 [2024-07-15 14:01:53.550386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.076 [2024-07-15 14:01:53.550402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:128176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.076 [2024-07-15 14:01:53.550416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.076 [2024-07-15 14:01:53.550431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:128184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.076 [2024-07-15 14:01:53.550445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.076 [2024-07-15 14:01:53.550460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:128192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.076 [2024-07-15 14:01:53.550477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.076 [2024-07-15 14:01:53.550494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:128200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.076 [2024-07-15 14:01:53.550508] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.076 [2024-07-15 14:01:53.550523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:128208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.076 [2024-07-15 14:01:53.550537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.076 [2024-07-15 14:01:53.550560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:128216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.076 [2024-07-15 14:01:53.550575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.076 [2024-07-15 14:01:53.550590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:128224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.076 [2024-07-15 14:01:53.550604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.076 [2024-07-15 14:01:53.550619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:128232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.076 [2024-07-15 14:01:53.550633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.076 [2024-07-15 14:01:53.550648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:128240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.076 [2024-07-15 14:01:53.550662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.076 [2024-07-15 14:01:53.550678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:128248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.076 [2024-07-15 14:01:53.550691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.076 [2024-07-15 14:01:53.550707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:128256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.076 [2024-07-15 14:01:53.550721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.076 [2024-07-15 14:01:53.550736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:128264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.076 [2024-07-15 14:01:53.550759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.077 [2024-07-15 14:01:53.550774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:128272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.077 [2024-07-15 14:01:53.550788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.077 [2024-07-15 14:01:53.550804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:128280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.077 [2024-07-15 14:01:53.550818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.077 [2024-07-15 14:01:53.550833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:128288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.077 [2024-07-15 14:01:53.550847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.077 [2024-07-15 14:01:53.550866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:128296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.077 [2024-07-15 14:01:53.550881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.077 [2024-07-15 14:01:53.550897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:128304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.077 [2024-07-15 14:01:53.550911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.077 [2024-07-15 14:01:53.550926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:128312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.077 [2024-07-15 14:01:53.550940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.077 [2024-07-15 14:01:53.550956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:128320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.077 [2024-07-15 14:01:53.550970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.077 [2024-07-15 14:01:53.550985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:128328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.077 [2024-07-15 14:01:53.550999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.077 [2024-07-15 14:01:53.551014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:128336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.077 [2024-07-15 14:01:53.551027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.077 [2024-07-15 14:01:53.551049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:128344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.077 [2024-07-15 14:01:53.551064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.077 [2024-07-15 14:01:53.551079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:128352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.077 [2024-07-15 14:01:53.551092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.077 [2024-07-15 14:01:53.551107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:128360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.077 [2024-07-15 14:01:53.551121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.077 [2024-07-15 14:01:53.551137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:128368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.077 [2024-07-15 14:01:53.551150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.077 [2024-07-15 14:01:53.551165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:128376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.077 [2024-07-15 14:01:53.551179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.077 [2024-07-15 14:01:53.551195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:128384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.077 [2024-07-15 14:01:53.551209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.077 [2024-07-15 14:01:53.551250] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.077 [2024-07-15 14:01:53.551272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128392 len:8 PRP1 0x0 PRP2 0x0 00:23:10.077 [2024-07-15 14:01:53.551287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.077 [2024-07-15 14:01:53.551305] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.077 [2024-07-15 14:01:53.551317] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.077 [2024-07-15 14:01:53.551328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128400 len:8 PRP1 0x0 PRP2 0x0 00:23:10.077 [2024-07-15 14:01:53.551341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.077 [2024-07-15 14:01:53.551354] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.077 [2024-07-15 14:01:53.551365] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.077 [2024-07-15 14:01:53.551377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128408 len:8 PRP1 0x0 PRP2 0x0 00:23:10.077 [2024-07-15 14:01:53.551389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.077 [2024-07-15 14:01:53.551402] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.077 [2024-07-15 14:01:53.551413] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.077 [2024-07-15 14:01:53.551425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128416 len:8 PRP1 0x0 PRP2 0x0 00:23:10.077 [2024-07-15 14:01:53.551438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.077 [2024-07-15 14:01:53.551451] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.077 [2024-07-15 14:01:53.551462] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:23:10.077 [2024-07-15 14:01:53.551473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128424 len:8 PRP1 0x0 PRP2 0x0 00:23:10.077 [2024-07-15 14:01:53.551486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.077 [2024-07-15 14:01:53.551499] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.077 [2024-07-15 14:01:53.551516] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.077 [2024-07-15 14:01:53.551528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128432 len:8 PRP1 0x0 PRP2 0x0 00:23:10.077 [2024-07-15 14:01:53.551541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.077 [2024-07-15 14:01:53.551554] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.077 [2024-07-15 14:01:53.551566] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.077 [2024-07-15 14:01:53.551577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128440 len:8 PRP1 0x0 PRP2 0x0 00:23:10.077 [2024-07-15 14:01:53.551590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.077 [2024-07-15 14:01:53.551603] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.077 [2024-07-15 14:01:53.551614] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.077 [2024-07-15 14:01:53.551625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128448 len:8 PRP1 0x0 PRP2 0x0 00:23:10.077 [2024-07-15 14:01:53.551638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.077 [2024-07-15 14:01:53.551652] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.077 [2024-07-15 14:01:53.551667] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.077 [2024-07-15 14:01:53.551679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128456 len:8 PRP1 0x0 PRP2 0x0 00:23:10.077 [2024-07-15 14:01:53.551692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.077 [2024-07-15 14:01:53.551706] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.077 [2024-07-15 14:01:53.551716] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.077 [2024-07-15 14:01:53.551728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128464 len:8 PRP1 0x0 PRP2 0x0 00:23:10.077 [2024-07-15 14:01:53.551747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.077 [2024-07-15 14:01:53.551763] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.077 [2024-07-15 14:01:53.551774] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.077 [2024-07-15 14:01:53.551786] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128472 len:8 PRP1 0x0 PRP2 0x0 00:23:10.077 [2024-07-15 14:01:53.551799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.077 [2024-07-15 14:01:53.551812] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.077 [2024-07-15 14:01:53.551823] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.077 [2024-07-15 14:01:53.551834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128480 len:8 PRP1 0x0 PRP2 0x0 00:23:10.077 [2024-07-15 14:01:53.551847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.077 [2024-07-15 14:01:53.551859] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.077 [2024-07-15 14:01:53.551870] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.077 [2024-07-15 14:01:53.551882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127992 len:8 PRP1 0x0 PRP2 0x0 00:23:10.077 [2024-07-15 14:01:53.551895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.077 [2024-07-15 14:01:53.551908] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.077 [2024-07-15 14:01:53.551925] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.077 [2024-07-15 14:01:53.551936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128000 len:8 PRP1 0x0 PRP2 0x0 00:23:10.077 [2024-07-15 14:01:53.551949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.077 [2024-07-15 14:01:53.552014] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x859500 was disconnected and freed. reset controller. 
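Each failover cycle in this log ends the same way: the remaining queued requests are aborted and completed manually, the old qpair (here 0x859500) is disconnected and freed, and the controller is reset onto the next path. To read the per-cycle reset latency straight off the wall-clock timestamps, something like the following against the same hypothetical nvmf_failover.log is enough; the gap between the bracketed timestamp of each "Start failover" notice and the next "Resetting controller successful." notice gives the latency for that switch:

  # list failover starts next to the matching reset-complete notices
  grep -E 'Start failover from|Resetting controller successful' nvmf_failover.log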
00:23:10.077 [2024-07-15 14:01:53.552032] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 
00:23:10.077 [2024-07-15 14:01:53.552068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:23:10.077 [2024-07-15 14:01:53.552086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:10.077 [2024-07-15 14:01:53.552102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:23:10.077 [2024-07-15 14:01:53.552116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:10.077 [2024-07-15 14:01:53.552130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:23:10.077 [2024-07-15 14:01:53.552155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:10.077 [2024-07-15 14:01:53.552170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:23:10.078 [2024-07-15 14:01:53.552184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:10.078 [2024-07-15 14:01:53.552197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:10.078 [2024-07-15 14:01:53.552247] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68e790 (9): Bad file descriptor 
00:23:10.078 [2024-07-15 14:01:53.555482] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:23:10.078 [2024-07-15 14:01:53.631966] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:10.078 [2024-07-15 14:01:58.146107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:10.078 [2024-07-15 14:01:58.146149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.078 [2024-07-15 14:01:58.146167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:10.078 [2024-07-15 14:01:58.146181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.078 [2024-07-15 14:01:58.146195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:10.078 [2024-07-15 14:01:58.146208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.078 [2024-07-15 14:01:58.146222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:10.078 [2024-07-15 14:01:58.146235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.078 [2024-07-15 14:01:58.146249] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68e790 is same with the state(5) to be set 00:23:10.078 [2024-07-15 14:01:58.146332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:82400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.078 [2024-07-15 14:01:58.146354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.078 [2024-07-15 14:01:58.146380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:82408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.078 [2024-07-15 14:01:58.146396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.078 [2024-07-15 14:01:58.146412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:82416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.078 [2024-07-15 14:01:58.146426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.078 [2024-07-15 14:01:58.146441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:82424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.078 [2024-07-15 14:01:58.146455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.078 [2024-07-15 14:01:58.146470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:82432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.078 [2024-07-15 14:01:58.146484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.078 [2024-07-15 14:01:58.146505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:82440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.078 [2024-07-15 14:01:58.146520] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.078 [2024-07-15 14:01:58.146535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:82448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.078 [2024-07-15 14:01:58.146549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.078 [2024-07-15 14:01:58.146564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:82456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.078 [2024-07-15 14:01:58.146577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.078 [2024-07-15 14:01:58.146593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:82464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.078 [2024-07-15 14:01:58.146606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.078 [2024-07-15 14:01:58.146621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:82472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.078 [2024-07-15 14:01:58.146634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.078 [2024-07-15 14:01:58.146650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:82480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.078 [2024-07-15 14:01:58.146663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.078 [2024-07-15 14:01:58.146678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:82488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.078 [2024-07-15 14:01:58.146692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.078 [2024-07-15 14:01:58.146708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.078 [2024-07-15 14:01:58.146746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.078 [2024-07-15 14:01:58.146765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:82504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.078 [2024-07-15 14:01:58.146780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.078 [2024-07-15 14:01:58.146796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:82512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.078 [2024-07-15 14:01:58.146811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.078 [2024-07-15 14:01:58.146826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:82520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.078 [2024-07-15 14:01:58.146841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.078 [2024-07-15 14:01:58.146856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:82528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.078 [2024-07-15 14:01:58.146870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.078 [2024-07-15 14:01:58.146886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.078 [2024-07-15 14:01:58.146904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.078 [2024-07-15 14:01:58.146920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:82544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.078 [2024-07-15 14:01:58.146934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.078 [2024-07-15 14:01:58.146950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:82552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.078 [2024-07-15 14:01:58.146964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.078 [2024-07-15 14:01:58.146979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:82560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.078 [2024-07-15 14:01:58.146994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.078 [2024-07-15 14:01:58.147009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:82568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.078 [2024-07-15 14:01:58.147023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.078 [2024-07-15 14:01:58.147040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:82576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.078 [2024-07-15 14:01:58.147054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.078 [2024-07-15 14:01:58.147069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:82584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.078 [2024-07-15 14:01:58.147098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.078 [2024-07-15 14:01:58.147113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:82592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.078 [2024-07-15 14:01:58.147127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.078 [2024-07-15 14:01:58.147158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:82600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.078 [2024-07-15 14:01:58.147173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.078 [2024-07-15 14:01:58.147188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:82608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.078 [2024-07-15 14:01:58.147202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.078 [2024-07-15 14:01:58.147217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:82616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.078 [2024-07-15 14:01:58.147231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.078 [2024-07-15 14:01:58.147247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:82624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.078 [2024-07-15 14:01:58.147261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.078 [2024-07-15 14:01:58.147276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:82632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.078 [2024-07-15 14:01:58.147290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.078 [2024-07-15 14:01:58.147310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:82640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.078 [2024-07-15 14:01:58.147324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.078 [2024-07-15 14:01:58.147340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.078 [2024-07-15 14:01:58.147354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.078 [2024-07-15 14:01:58.147369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:82656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.078 [2024-07-15 14:01:58.147383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.078 [2024-07-15 14:01:58.147399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.078 [2024-07-15 14:01:58.147412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.078 [2024-07-15 14:01:58.147427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:82672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.078 [2024-07-15 14:01:58.147441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.079 [2024-07-15 14:01:58.147457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:82680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.079 [2024-07-15 14:01:58.147471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.079 
[2024-07-15 14:01:58.147486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:82688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.079 [2024-07-15 14:01:58.147500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.079 [2024-07-15 14:01:58.147515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:82696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.079 [2024-07-15 14:01:58.147529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.079 [2024-07-15 14:01:58.147544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:82704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.079 [2024-07-15 14:01:58.147573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.079 [2024-07-15 14:01:58.147589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:82712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.079 [2024-07-15 14:01:58.147602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.079 [2024-07-15 14:01:58.147633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:82720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.079 [2024-07-15 14:01:58.147648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.079 [2024-07-15 14:01:58.147663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:82728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.079 [2024-07-15 14:01:58.147677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.079 [2024-07-15 14:01:58.147692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:82736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.079 [2024-07-15 14:01:58.147706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.079 [2024-07-15 14:01:58.147729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:82744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.079 [2024-07-15 14:01:58.147751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.079 [2024-07-15 14:01:58.147768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:82752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.079 [2024-07-15 14:01:58.147782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.079 [2024-07-15 14:01:58.147798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:82760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.079 [2024-07-15 14:01:58.147812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.079 [2024-07-15 14:01:58.147828] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:82768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.079 [2024-07-15 14:01:58.147842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.079 [2024-07-15 14:01:58.147857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:82776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.079 [2024-07-15 14:01:58.147871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.079 [2024-07-15 14:01:58.147886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:82784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.079 [2024-07-15 14:01:58.147900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.079 [2024-07-15 14:01:58.147915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:82792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.079 [2024-07-15 14:01:58.147929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.079 [2024-07-15 14:01:58.147944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:82800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.079 [2024-07-15 14:01:58.147958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.079 [2024-07-15 14:01:58.147973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.079 [2024-07-15 14:01:58.147987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.079 [2024-07-15 14:01:58.148003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:82816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.079 [2024-07-15 14:01:58.148017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.079 [2024-07-15 14:01:58.148033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:82824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.079 [2024-07-15 14:01:58.148046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.079 [2024-07-15 14:01:58.148062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.079 [2024-07-15 14:01:58.148076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.079 [2024-07-15 14:01:58.148091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:82848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.080 [2024-07-15 14:01:58.148109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.080 [2024-07-15 14:01:58.148124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:34 nsid:1 lba:82856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.080 [2024-07-15 14:01:58.148139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.080 [2024-07-15 14:01:58.148170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.080 [2024-07-15 14:01:58.148184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.080 [2024-07-15 14:01:58.148198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:82872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.080 [2024-07-15 14:01:58.148212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.080 [2024-07-15 14:01:58.148226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:82880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.080 [2024-07-15 14:01:58.148240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.080 [2024-07-15 14:01:58.148255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:82888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.080 [2024-07-15 14:01:58.148269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.080 [2024-07-15 14:01:58.148284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:82896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.080 [2024-07-15 14:01:58.148297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.080 [2024-07-15 14:01:58.148312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:82904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.080 [2024-07-15 14:01:58.148326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.080 [2024-07-15 14:01:58.148341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:82912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.080 [2024-07-15 14:01:58.148354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.080 [2024-07-15 14:01:58.148385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:82920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.080 [2024-07-15 14:01:58.148399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.080 [2024-07-15 14:01:58.148422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:82928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.080 [2024-07-15 14:01:58.148437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.080 [2024-07-15 14:01:58.148452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:82936 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:23:10.080 [2024-07-15 14:01:58.148466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.080 [2024-07-15 14:01:58.148482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:82944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.080 [2024-07-15 14:01:58.148496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.080 [2024-07-15 14:01:58.148516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:82952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.080 [2024-07-15 14:01:58.148531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.080 [2024-07-15 14:01:58.148546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:82960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.080 [2024-07-15 14:01:58.148560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.080 [2024-07-15 14:01:58.148575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:82968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.080 [2024-07-15 14:01:58.148590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.080 [2024-07-15 14:01:58.148605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:82976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.080 [2024-07-15 14:01:58.148619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.080 [2024-07-15 14:01:58.148635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:82984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.080 [2024-07-15 14:01:58.148649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.080 [2024-07-15 14:01:58.148664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:82992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.080 [2024-07-15 14:01:58.148677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.080 [2024-07-15 14:01:58.148692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:83000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.080 [2024-07-15 14:01:58.148706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.080 [2024-07-15 14:01:58.148721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.080 [2024-07-15 14:01:58.148735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.080 [2024-07-15 14:01:58.148762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:83016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.080 [2024-07-15 
14:01:58.148777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.080 [2024-07-15 14:01:58.148793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:83024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.080 [2024-07-15 14:01:58.148807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.080 [2024-07-15 14:01:58.148822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:83032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.080 [2024-07-15 14:01:58.148835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.080 [2024-07-15 14:01:58.148851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:83040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.080 [2024-07-15 14:01:58.148865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.080 [2024-07-15 14:01:58.148880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:83048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.080 [2024-07-15 14:01:58.148897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.080 [2024-07-15 14:01:58.148919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:83056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.080 [2024-07-15 14:01:58.148934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.080 [2024-07-15 14:01:58.148949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:83064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.080 [2024-07-15 14:01:58.148962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.080 [2024-07-15 14:01:58.148978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:83072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.080 [2024-07-15 14:01:58.148992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.080 [2024-07-15 14:01:58.149007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.080 [2024-07-15 14:01:58.149021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.080 [2024-07-15 14:01:58.149037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:83088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.080 [2024-07-15 14:01:58.149051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.080 [2024-07-15 14:01:58.149066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:83096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.080 [2024-07-15 14:01:58.149080] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.080 [2024-07-15 14:01:58.149095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:83104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.080 [2024-07-15 14:01:58.149109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.080 [2024-07-15 14:01:58.149124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:83112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.080 [2024-07-15 14:01:58.149137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.081 [2024-07-15 14:01:58.149153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:83120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.081 [2024-07-15 14:01:58.149166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.081 [2024-07-15 14:01:58.149181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:83128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.081 [2024-07-15 14:01:58.149195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.081 [2024-07-15 14:01:58.149210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:83136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.081 [2024-07-15 14:01:58.149225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.081 [2024-07-15 14:01:58.149246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:83144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.081 [2024-07-15 14:01:58.149261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.081 [2024-07-15 14:01:58.149277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:83152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.081 [2024-07-15 14:01:58.149295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.081 [2024-07-15 14:01:58.149311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:83160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.081 [2024-07-15 14:01:58.149325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.081 [2024-07-15 14:01:58.149340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:83168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.081 [2024-07-15 14:01:58.149354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.081 [2024-07-15 14:01:58.149370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:83176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.081 [2024-07-15 14:01:58.149383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.081 [2024-07-15 14:01:58.149405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:83184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.081 [2024-07-15 14:01:58.149419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.081 [2024-07-15 14:01:58.149435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:83192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.081 [2024-07-15 14:01:58.149449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.081 [2024-07-15 14:01:58.149464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:83200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.081 [2024-07-15 14:01:58.149478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.081 [2024-07-15 14:01:58.149494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:83208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.081 [2024-07-15 14:01:58.149507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.081 [2024-07-15 14:01:58.149522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:83216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.081 [2024-07-15 14:01:58.149536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.081 [2024-07-15 14:01:58.149551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:83224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.081 [2024-07-15 14:01:58.149565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.081 [2024-07-15 14:01:58.149580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:83232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.081 [2024-07-15 14:01:58.149594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.081 [2024-07-15 14:01:58.149609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:83240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.081 [2024-07-15 14:01:58.149623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.081 [2024-07-15 14:01:58.149639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:83248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.081 [2024-07-15 14:01:58.149652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.081 [2024-07-15 14:01:58.149671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:83256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.081 [2024-07-15 14:01:58.149686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:10.081 [2024-07-15 14:01:58.149702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:83264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.081 [2024-07-15 14:01:58.149716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.081 [2024-07-15 14:01:58.149732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:83272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.081 [2024-07-15 14:01:58.149755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.081 [2024-07-15 14:01:58.149771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:83280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.081 [2024-07-15 14:01:58.149786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.081 [2024-07-15 14:01:58.149801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.081 [2024-07-15 14:01:58.149815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.081 [2024-07-15 14:01:58.149830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:83296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.081 [2024-07-15 14:01:58.149844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.081 [2024-07-15 14:01:58.149859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:83304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.081 [2024-07-15 14:01:58.149874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.081 [2024-07-15 14:01:58.149889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:83312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.081 [2024-07-15 14:01:58.149903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.081 [2024-07-15 14:01:58.149918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:83320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.081 [2024-07-15 14:01:58.149932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.081 [2024-07-15 14:01:58.149948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:83328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.081 [2024-07-15 14:01:58.149961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.081 [2024-07-15 14:01:58.149976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:83336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.081 [2024-07-15 14:01:58.149992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.081 [2024-07-15 
14:01:58.150008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:83344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.081 [2024-07-15 14:01:58.150022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.081 [2024-07-15 14:01:58.150038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:83352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.081 [2024-07-15 14:01:58.150056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.081 [2024-07-15 14:01:58.150072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:83360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.081 [2024-07-15 14:01:58.150087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.081 [2024-07-15 14:01:58.150102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:83368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.081 [2024-07-15 14:01:58.150117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.081 [2024-07-15 14:01:58.150133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:83376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.081 [2024-07-15 14:01:58.150147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.081 [2024-07-15 14:01:58.150163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:83384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.081 [2024-07-15 14:01:58.150177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.081 [2024-07-15 14:01:58.150192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.081 [2024-07-15 14:01:58.150206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.081 [2024-07-15 14:01:58.150222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:83400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.081 [2024-07-15 14:01:58.150237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.081 [2024-07-15 14:01:58.150252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:83408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.081 [2024-07-15 14:01:58.150266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.081 [2024-07-15 14:01:58.150282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:83416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.081 [2024-07-15 14:01:58.150296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.081 [2024-07-15 14:01:58.150341] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:10.081 [2024-07-15 14:01:58.150357] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:10.081 [2024-07-15 14:01:58.150370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82832 len:8 PRP1 0x0 PRP2 0x0
00:23:10.081 [2024-07-15 14:01:58.150383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:10.081 [2024-07-15 14:01:58.150447] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8592f0 was disconnected and freed. reset controller.
00:23:10.081 [2024-07-15 14:01:58.150466] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:23:10.081 [2024-07-15 14:01:58.150481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:10.081 [2024-07-15 14:01:58.153715] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:10.081 [2024-07-15 14:01:58.153762] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68e790 (9): Bad file descriptor
00:23:10.081 [2024-07-15 14:01:58.192537] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:23:10.081
00:23:10.082 Latency(us)
00:23:10.082 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:10.082 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:10.082 Verification LBA range: start 0x0 length 0x4000
00:23:10.082 NVMe0n1 : 15.01 8810.27 34.42 711.22 0.00 13417.10 552.20 14757.74
00:23:10.082 ===================================================================================================================
00:23:10.082 Total : 8810.27 34.42 711.22 0.00 13417.10 552.20 14757.74
00:23:10.082 Received shutdown signal, test time was about 15.000000 seconds
00:23:10.082
00:23:10.082 Latency(us)
00:23:10.082 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:10.082 ===================================================================================================================
00:23:10.082 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:10.082 14:02:04 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:23:10.082 14:02:04 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:23:10.082 14:02:04 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:23:10.082 14:02:04 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3821117
00:23:10.082 14:02:04 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:23:10.082 14:02:04 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3821117 /var/tmp/bdevperf.sock
00:23:10.082 14:02:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3821117 ']'
00:23:10.082 14:02:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:10.082 14:02:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:23:10.082 14:02:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
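Taken together, the failover.sh steps just traced reduce to one acceptance check: the 15-second bdevperf run must have reported 'Resetting controller successful' exactly three times, and only then is a second bdevperf instance started in wait-for-RPC mode for the follow-up run. A minimal sketch of that check, assuming the previous run's output sits in the try.txt file that the trace later cats (the variable names here are illustrative, not the script's own):

  # count the successful controller resets reported by the 15-second run
  count=$(grep -c 'Resetting controller successful' try.txt)
  (( count != 3 )) && exit 1
  # relaunch bdevperf in wait-for-RPC mode (-z) on the socket configured below
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!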
00:23:10.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:10.082 14:02:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:10.082 14:02:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:10.082 14:02:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:10.082 14:02:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:23:10.082 14:02:04 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:10.082 [2024-07-15 14:02:04.560794] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:10.082 14:02:04 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:10.082 [2024-07-15 14:02:04.801389] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:10.082 14:02:04 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:10.340 NVMe0n1 00:23:10.340 14:02:05 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:10.907 00:23:10.907 14:02:05 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:11.165 00:23:11.165 14:02:05 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:11.165 14:02:05 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:23:11.422 14:02:06 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:11.682 14:02:06 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:23:14.969 14:02:09 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:14.969 14:02:09 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:23:14.969 14:02:09 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3821848 00:23:14.969 14:02:09 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:14.969 14:02:09 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 3821848 00:23:16.340 0 00:23:16.340 14:02:10 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:16.340 [2024-07-15 14:02:04.053318] Starting SPDK v24.09-pre git sha1 
b124a6951 / DPDK 24.03.0 initialization... 00:23:16.340 [2024-07-15 14:02:04.053417] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3821117 ] 00:23:16.340 EAL: No free 2048 kB hugepages reported on node 1 00:23:16.340 [2024-07-15 14:02:04.117460] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.340 [2024-07-15 14:02:04.224220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:16.340 [2024-07-15 14:02:06.398321] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:16.340 [2024-07-15 14:02:06.398429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:16.340 [2024-07-15 14:02:06.398452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.340 [2024-07-15 14:02:06.398471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:16.340 [2024-07-15 14:02:06.398485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.340 [2024-07-15 14:02:06.398499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:16.340 [2024-07-15 14:02:06.398513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.340 [2024-07-15 14:02:06.398527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:16.340 [2024-07-15 14:02:06.398541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.340 [2024-07-15 14:02:06.398563] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:16.340 [2024-07-15 14:02:06.398616] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:16.340 [2024-07-15 14:02:06.398649] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19b2790 (9): Bad file descriptor 00:23:16.340 [2024-07-15 14:02:06.409230] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:16.340 Running I/O for 1 seconds... 
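For orientation, the RPC sequence traced at host/failover.sh@76 through @92 above amounts to the steps below; $SPDK is an editorial shorthand for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk, and the loop merely condenses three attach calls that differ only in the port:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc="$SPDK/scripts/rpc.py"
  # expose two additional TCP listeners on the subsystem (ports 4421 and 4422)
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  # attach the same controller to bdevperf through all three paths
  for port in 4420 4421 4422; do
      $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
          -t tcp -a 10.0.0.2 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  done
  # drop the active 4420 path, give the initiator time to fail over, then run the queued job
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  sleep 3
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The 'Start failover from 10.0.0.2:4420 to 10.0.0.2:4421' notice in the captured try.txt output is the initiator reacting to exactly that detach.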
00:23:16.340
00:23:16.340 Latency(us)
00:23:16.340 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:16.340 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:16.340 Verification LBA range: start 0x0 length 0x4000
00:23:16.340 NVMe0n1 : 1.00 8842.29 34.54 0.00 0.00 14407.94 673.56 16117.00
00:23:16.340 ===================================================================================================================
00:23:16.340 Total : 8842.29 34.54 0.00 0.00 14407.94 673.56 16117.00
00:23:16.340 14:02:10 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:23:16.340 14:02:10 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:23:16.598 14:02:11 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:16.598 14:02:11 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:23:16.598 14:02:11 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:23:16.856 14:02:11 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:17.133 14:02:11 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:23:20.454 14:02:14 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:23:20.454 14:02:14 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:23:20.454 14:02:15 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 3821117
00:23:20.454 14:02:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3821117 ']'
00:23:20.454 14:02:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3821117
00:23:20.454 14:02:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
00:23:20.454 14:02:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:23:20.454 14:02:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3821117
00:23:20.454 14:02:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:23:20.454 14:02:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:23:20.454 14:02:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3821117'
00:23:20.454 killing process with pid 3821117
00:23:20.454 14:02:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3821117
00:23:20.454 14:02:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3821117
00:23:20.711 14:02:15 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync
00:23:20.711 14:02:15 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:23:20.968 14:02:15 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:23:20.968
14:02:15 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:20.968 14:02:15 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:23:20.968 14:02:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:20.968 14:02:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:23:20.968 14:02:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:20.968 14:02:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:23:20.968 14:02:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:20.968 14:02:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:20.968 rmmod nvme_tcp 00:23:20.968 rmmod nvme_fabrics 00:23:20.968 rmmod nvme_keyring 00:23:20.968 14:02:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:20.968 14:02:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:23:20.968 14:02:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:23:20.968 14:02:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 3818966 ']' 00:23:20.968 14:02:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 3818966 00:23:20.968 14:02:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3818966 ']' 00:23:20.968 14:02:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3818966 00:23:20.968 14:02:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:23:20.968 14:02:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:20.968 14:02:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3818966 00:23:20.968 14:02:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:20.968 14:02:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:20.968 14:02:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3818966' 00:23:20.968 killing process with pid 3818966 00:23:20.968 14:02:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3818966 00:23:20.968 14:02:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3818966 00:23:21.535 14:02:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:21.535 14:02:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:21.535 14:02:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:21.535 14:02:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:21.535 14:02:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:21.535 14:02:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:21.535 14:02:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:21.535 14:02:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:23.437 14:02:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:23.437 00:23:23.437 real 0m35.119s 00:23:23.437 user 2m3.554s 00:23:23.437 sys 0m6.187s 00:23:23.437 14:02:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:23.437 14:02:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
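The teardown traced above (host/failover.sh@110 through @116 plus nvmftestfini from nvmf/common.sh) condenses to the sketch below; $SPDK is the same editorial shorthand as earlier, and the PID is the one this run recorded for its long-lived nvmf target:

  sync
  $SPDK/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  rm -f $SPDK/test/nvmf/host/try.txt    # drop the captured bdevperf log
  modprobe -v -r nvme-tcp               # the rmmod lines for nvme_tcp/nvme_fabrics/nvme_keyring above follow from this
  modprobe -v -r nvme-fabrics
  killprocess 3818966                   # autotest_common.sh helper: kill -0, ps, kill, then wait
  ip -4 addr flush cvl_0_1              # clear the test address from interface cvl_0_1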
00:23:23.437 ************************************ 00:23:23.437 END TEST nvmf_failover 00:23:23.437 ************************************ 00:23:23.437 14:02:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:23.437 14:02:18 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:23.437 14:02:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:23.437 14:02:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:23.437 14:02:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:23.437 ************************************ 00:23:23.437 START TEST nvmf_host_discovery 00:23:23.437 ************************************ 00:23:23.437 14:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:23.437 * Looking for test storage... 00:23:23.437 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:23.437 14:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:23.437 14:02:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:23.437 14:02:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:23.437 14:02:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:23.437 14:02:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:23.437 14:02:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:23.437 14:02:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:23.437 14:02:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:23.437 14:02:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:23.437 14:02:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:23.437 14:02:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:23.437 14:02:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:23.437 14:02:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:23.437 14:02:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:23:23.437 14:02:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:23.437 14:02:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:23.437 14:02:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:23.437 14:02:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:23.437 14:02:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:23.437 14:02:18 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:23.437 14:02:18 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:23.437 14:02:18 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:23.437 14:02:18 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.437 14:02:18 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.437 14:02:18 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.437 14:02:18 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:23:23.437 14:02:18 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.437 14:02:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:23:23.437 14:02:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:23.437 14:02:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:23.437 14:02:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:23.437 14:02:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:23.437 14:02:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:23.437 14:02:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:23.437 14:02:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:23.437 14:02:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:23.437 14:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:23:23.437 14:02:18 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:23:23.437 14:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:23.437 14:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:23.437 14:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:23.437 14:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:23:23.437 14:02:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:23:23.437 14:02:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:23.437 14:02:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:23.437 14:02:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:23.437 14:02:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:23.437 14:02:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:23.437 14:02:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:23.437 14:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:23.437 14:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:23.437 14:02:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:23.438 14:02:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:23.438 14:02:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:23:23.438 14:02:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:25.972 14:02:20 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:25.972 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:25.972 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:25.972 14:02:20 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:23:25.972 Found net devices under 0000:84:00.0: cvl_0_0 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:25.972 Found net devices under 0000:84:00.1: cvl_0_1 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:25.972 14:02:20 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:25.972 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:25.972 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.461 ms 00:23:25.972 00:23:25.972 --- 10.0.0.2 ping statistics --- 00:23:25.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.972 rtt min/avg/max/mdev = 0.461/0.461/0.461/0.000 ms 00:23:25.972 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:25.972 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:25.972 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:23:25.972 00:23:25.972 --- 10.0.0.1 ping statistics --- 00:23:25.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.972 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:23:25.973 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:25.973 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:23:25.973 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:25.973 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:25.973 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:25.973 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:25.973 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:25.973 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:25.973 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:25.973 14:02:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:23:25.973 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:25.973 14:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:25.973 14:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:25.973 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=3824520 00:23:25.973 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:25.973 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 3824520 00:23:25.973 14:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 3824520 ']' 00:23:25.973 14:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:25.973 14:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:25.973 14:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:25.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:25.973 14:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:25.973 14:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:25.973 [2024-07-15 14:02:20.573195] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:23:25.973 [2024-07-15 14:02:20.573293] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:25.973 EAL: No free 2048 kB hugepages reported on node 1 00:23:25.973 [2024-07-15 14:02:20.638116] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.973 [2024-07-15 14:02:20.742706] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:25.973 [2024-07-15 14:02:20.742786] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:25.973 [2024-07-15 14:02:20.742800] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:25.973 [2024-07-15 14:02:20.742826] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:25.973 [2024-07-15 14:02:20.742836] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
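The netns plumbing traced above is the heart of nvmftestinit for NET_TYPE=phy: one E810 port (cvl_0_0 in this run) is moved into a private network namespace to act as the target side of the link, while its sibling (cvl_0_1) stays in the root namespace as the initiator. A minimal sketch of the same sequence, reusing the interface names, addresses and nvmf_tgt flags seen in this log (they will differ on other hosts; SPDK_DIR is a placeholder for the checkout path, not part of the test):

#!/usr/bin/env bash
# Sketch of the target-side network setup performed by nvmftestinit above.
# cvl_0_0/cvl_0_1, the namespace name and the 10.0.0.x addresses are the
# values from this run; SPDK_DIR is a placeholder, not something the test sets.
set -e
SPDK_DIR=${SPDK_DIR:-.}
TARGET_IF=cvl_0_0                # port handed to the target
INITIATOR_IF=cvl_0_1             # port kept for the initiator
NS=cvl_0_0_ns_spdk               # namespace owning the target port

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

# Reachability check in both directions before any NVMe/TCP traffic.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

# The target application then runs inside the namespace, as in the log above.
ip netns exec "$NS" "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &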
00:23:25.973 [2024-07-15 14:02:20.742870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:26.231 14:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:26.231 14:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:23:26.232 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:26.232 14:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:26.232 14:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.232 14:02:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:26.232 14:02:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:26.232 14:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.232 14:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.232 [2024-07-15 14:02:20.871412] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:26.232 14:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.232 14:02:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:26.232 14:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.232 14:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.232 [2024-07-15 14:02:20.879536] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:26.232 14:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.232 14:02:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:26.232 14:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.232 14:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.232 null0 00:23:26.232 14:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.232 14:02:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:26.232 14:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.232 14:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.232 null1 00:23:26.232 14:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.232 14:02:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:23:26.232 14:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.232 14:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.232 14:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.232 14:02:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3824552 00:23:26.232 14:02:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:26.232 14:02:20 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@46 -- # waitforlisten 3824552 /tmp/host.sock 00:23:26.232 14:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 3824552 ']' 00:23:26.232 14:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:23:26.232 14:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:26.232 14:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:26.232 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:26.232 14:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:26.232 14:02:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.232 [2024-07-15 14:02:20.949204] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:23:26.232 [2024-07-15 14:02:20.949287] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3824552 ] 00:23:26.232 EAL: No free 2048 kB hugepages reported on node 1 00:23:26.232 [2024-07-15 14:02:21.005844] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.490 [2024-07-15 14:02:21.126365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:26.490 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:26.490 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:23:26.490 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:26.490 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:26.490 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.491 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.491 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.491 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:23:26.491 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.491 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.491 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.491 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:23:26.491 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:23:26.491 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:26.491 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.491 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.491 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:26.491 14:02:21 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@59 -- # sort 00:23:26.491 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:26.491 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.491 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:23:26.491 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:23:26.491 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:26.491 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:26.491 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.491 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.491 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:26.491 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:26.491 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.491 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:23:26.491 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:26.491 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.491 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.749 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.749 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:23:26.749 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:26.749 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:26.749 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.749 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.749 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:26.749 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:26.749 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.749 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:23:26.749 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:23:26.749 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:26.749 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:26.749 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.749 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.749 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:26.749 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:26.749 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.749 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:23:26.749 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 
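From here on the test talks to two SPDK processes over separate RPC sockets: the nvmf target on the default /var/tmp/spdk.sock, and the host-side bdev_nvme instance on /tmp/host.sock, where bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test was issued. The get_subsystem_names, get_bdev_list, get_subsystem_paths and get_notification_count helpers whose expansions dominate the trace are thin wrappers over three RPCs; roughly, expressed with scripts/rpc.py instead of the test's rpc_cmd wrapper, they amount to:

#!/usr/bin/env bash
# Approximation of the polling helpers used by host/discovery.sh in the trace.
# HOST_SOCK matches this run; rpc.py stands in for the test's rpc_cmd wrapper.
HOST_SOCK=/tmp/host.sock

get_subsystem_names() {
    # Controllers attached by the host-side bdev_nvme module (e.g. "nvme0").
    rpc.py -s "$HOST_SOCK" bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}

get_bdev_list() {
    # Block devices exposed through those controllers (e.g. "nvme0n1 nvme0n2").
    rpc.py -s "$HOST_SOCK" bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

get_subsystem_paths() {
    # Listener ports (trsvcid) a given controller currently has paths to.
    rpc.py -s "$HOST_SOCK" bdev_nvme_get_controllers -n "$1" \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}

get_notification_count() {
    # Number of bdev notifications recorded after the given notify_id.
    rpc.py -s "$HOST_SOCK" notify_get_notifications -i "$1" | jq '. | length'
}

The assertions in the trace are then just these helpers wrapped in the waitforcondition retry loop visible above (max=10 attempts with a one-second sleep between them).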
00:23:26.749 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.749 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.749 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.749 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:23:26.749 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:26.749 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.749 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:26.749 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.749 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:26.749 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:26.749 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.749 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:23:26.749 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:23:26.749 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:26.749 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:26.749 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.749 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.749 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:26.749 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:26.749 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.749 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:23:26.749 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:26.749 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.749 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.749 [2024-07-15 14:02:21.513280] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:26.749 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.749 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:23:26.749 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:26.749 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:26.749 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.749 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:26.749 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.749 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:26.749 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.749 14:02:21 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@97 -- # [[ '' == '' ]] 00:23:26.749 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:23:26.749 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:26.750 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:26.750 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.750 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:26.750 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.750 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:26.750 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.051 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:23:27.051 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:23:27.051 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:27.051 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:27.051 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:27.051 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:27.051 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:27.051 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:27.051 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:27.051 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:27.051 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:27.051 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.051 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.051 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.051 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:27.051 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:23:27.051 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:27.051 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:27.051 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:27.051 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.051 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.051 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.051 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:27.051 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:27.051 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:27.051 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:27.051 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:27.051 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:27.051 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:27.051 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:27.051 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.051 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:27.051 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.051 14:02:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:27.051 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.051 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:23:27.051 14:02:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:23:27.618 [2024-07-15 14:02:22.299455] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:27.618 [2024-07-15 14:02:22.299478] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:27.618 [2024-07-15 14:02:22.299498] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:27.618 [2024-07-15 14:02:22.426927] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:27.875 [2024-07-15 14:02:22.530240] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:23:27.875 [2024-07-15 14:02:22.530262] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:27.875 14:02:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:27.875 14:02:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:27.875 14:02:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:27.875 14:02:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:27.875 14:02:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:27.875 14:02:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.875 14:02:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.875 14:02:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:27.875 14:02:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:27.875 14:02:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.132 14:02:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.132 14:02:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:28.132 14:02:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:28.132 14:02:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:28.132 14:02:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:28.132 14:02:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:28.132 14:02:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:23:28.132 14:02:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:28.132 14:02:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:28.132 14:02:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.132 14:02:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:28.132 14:02:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.133 14:02:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:28.133 14:02:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:28.133 14:02:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.133 14:02:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:23:28.133 14:02:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:28.133 14:02:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:28.133 14:02:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:28.133 14:02:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:28.133 14:02:22 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:28.133 14:02:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:23:28.133 14:02:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:23:28.133 14:02:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:28.133 14:02:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.133 14:02:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:28.133 14:02:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.133 14:02:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:28.133 14:02:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:28.133 14:02:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.133 14:02:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:23:28.133 14:02:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:28.133 14:02:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:23:28.133 14:02:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:28.133 14:02:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:28.133 14:02:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:28.133 14:02:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:28.133 14:02:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:28.133 14:02:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:28.133 14:02:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:28.133 14:02:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:28.133 14:02:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.133 14:02:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:28.133 14:02:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.133 14:02:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.133 14:02:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:28.133 14:02:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:23:28.133 14:02:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:28.133 14:02:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:28.133 14:02:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:28.133 14:02:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.133 14:02:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.133 14:02:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.133 14:02:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:28.133 14:02:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:28.133 14:02:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:28.133 14:02:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:28.133 14:02:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:28.133 14:02:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:28.133 14:02:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:28.133 14:02:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.133 14:02:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:28.133 14:02:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.133 14:02:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:28.133 14:02:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.391 [2024-07-15 14:02:23.133867] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:28.391 [2024-07-15 14:02:23.134156] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:28.391 [2024-07-15 14:02:23.134186] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:28.391 14:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.651 [2024-07-15 14:02:23.260957] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:28.651 14:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:23:28.651 14:02:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:23:28.651 [2024-07-15 14:02:23.363545] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:28.651 [2024-07-15 14:02:23.363565] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:28.651 [2024-07-15 14:02:23.363574] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:29.587 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:29.587 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:29.587 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:23:29.587 14:02:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:29.587 14:02:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:29.587 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.587 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.587 14:02:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:29.587 14:02:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:29.587 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.587 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:29.587 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:29.587 14:02:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:29.587 14:02:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:29.587 14:02:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:29.587 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:29.587 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:29.587 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:29.588 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:29.588 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:29.588 14:02:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:29.588 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.588 14:02:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:29.588 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.588 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.588 14:02:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:29.588 14:02:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:29.588 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:29.588 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:29.588 14:02:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:29.588 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.588 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.588 [2024-07-15 14:02:24.361817] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:29.588 [2024-07-15 14:02:24.361854] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:29.588 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.588 14:02:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:29.588 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:29.588 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:29.588 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:29.588 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:29.588 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:29.588 [2024-07-15 14:02:24.368164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.588 [2024-07-15 14:02:24.368199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.588 [2024-07-15 14:02:24.368232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.588 [2024-07-15 14:02:24.368245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.588 [2024-07-15 14:02:24.368259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.588 [2024-07-15 14:02:24.368272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.588 [2024-07-15 14:02:24.368286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.588 [2024-07-15 14:02:24.368298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.588 [2024-07-15 14:02:24.368310] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f210 is same with the state(5) to be set 00:23:29.588 14:02:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:29.588 14:02:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:29.588 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.588 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.588 14:02:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:29.588 14:02:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:29.588 [2024-07-15 14:02:24.378171] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f210 (9): Bad file descriptor 00:23:29.588 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.588 [2024-07-15 14:02:24.388213] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:29.588 [2024-07-15 14:02:24.388470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.588 [2024-07-15 14:02:24.388498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f210 with addr=10.0.0.2, port=4420 00:23:29.588 [2024-07-15 14:02:24.388514] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f210 is same with the state(5) to be set 00:23:29.588 [2024-07-15 14:02:24.388536] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f210 (9): Bad file descriptor 00:23:29.588 [2024-07-15 14:02:24.388570] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:29.588 [2024-07-15 14:02:24.388587] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:29.588 [2024-07-15 14:02:24.388603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:29.588 [2024-07-15 14:02:24.388623] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:29.588 [2024-07-15 14:02:24.398309] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:29.588 [2024-07-15 14:02:24.398504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.588 [2024-07-15 14:02:24.398530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f210 with addr=10.0.0.2, port=4420 00:23:29.588 [2024-07-15 14:02:24.398545] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f210 is same with the state(5) to be set 00:23:29.588 [2024-07-15 14:02:24.398567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f210 (9): Bad file descriptor 00:23:29.588 [2024-07-15 14:02:24.398587] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:29.588 [2024-07-15 14:02:24.398599] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:29.588 [2024-07-15 14:02:24.398612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
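The repeated "connect() failed, errno = 111" / "Resetting controller failed." cycles above are the expected fallout of the nvmf_subsystem_remove_listener call at discovery.sh@127: the host keeps trying to reconnect tqpair 0xe0f210 to 10.0.0.2:4420, which no longer has a listener, so every attempt is refused. Below is a minimal bash sketch of the same refusal outside the test harness; the probe_port helper and the hard-coded address/ports are illustrative assumptions, not part of the test.

probe_port() {
    local ip=$1 port=$2
    # bash's /dev/tcp pseudo-device attempts a plain TCP connect; a removed
    # listener answers with "Connection refused", i.e. the errno 111 in the log.
    if (exec 3<> "/dev/tcp/$ip/$port") 2>/dev/null; then
        echo "port $port on $ip still accepting connections"
    else
        echo "port $port on $ip refused (ECONNREFUSED / errno 111)"
    fi
}
probe_port 10.0.0.2 4420   # refused after nvmf_subsystem_remove_listener
probe_port 10.0.0.2 4421   # still up, per the "found again" discovery entries further down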
00:23:29.588 [2024-07-15 14:02:24.398631] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:29.588 [2024-07-15 14:02:24.408392] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:29.588 [2024-07-15 14:02:24.408677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.588 [2024-07-15 14:02:24.408705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f210 with addr=10.0.0.2, port=4420 00:23:29.588 [2024-07-15 14:02:24.408734] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f210 is same with the state(5) to be set 00:23:29.588 [2024-07-15 14:02:24.408766] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f210 (9): Bad file descriptor 00:23:29.588 [2024-07-15 14:02:24.408807] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:29.588 [2024-07-15 14:02:24.408824] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:29.588 [2024-07-15 14:02:24.408837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:29.588 [2024-07-15 14:02:24.408855] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:29.588 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.588 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:29.588 14:02:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:29.588 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:29.588 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:29.588 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:29.588 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:29.588 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:29.588 14:02:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:29.588 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.588 14:02:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:29.588 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.588 14:02:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:29.588 14:02:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:29.588 [2024-07-15 14:02:24.418478] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:29.588 [2024-07-15 14:02:24.418664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.588 [2024-07-15 14:02:24.418690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f210 with addr=10.0.0.2, port=4420 00:23:29.588 [2024-07-15 14:02:24.418705] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xe0f210 is same with the state(5) to be set 00:23:29.588 [2024-07-15 14:02:24.418751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f210 (9): Bad file descriptor 00:23:29.588 [2024-07-15 14:02:24.418774] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:29.588 [2024-07-15 14:02:24.418809] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:29.588 [2024-07-15 14:02:24.418822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:29.588 [2024-07-15 14:02:24.418841] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:29.847 [2024-07-15 14:02:24.428560] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:29.847 [2024-07-15 14:02:24.428763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.847 [2024-07-15 14:02:24.428800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f210 with addr=10.0.0.2, port=4420 00:23:29.847 [2024-07-15 14:02:24.428817] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f210 is same with the state(5) to be set 00:23:29.848 [2024-07-15 14:02:24.428853] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f210 (9): Bad file descriptor 00:23:29.848 [2024-07-15 14:02:24.428891] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:29.848 [2024-07-15 14:02:24.428908] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:29.848 [2024-07-15 14:02:24.428922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:29.848 [2024-07-15 14:02:24.428941] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:29.848 [2024-07-15 14:02:24.438648] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:29.848 [2024-07-15 14:02:24.438848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.848 [2024-07-15 14:02:24.438877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f210 with addr=10.0.0.2, port=4420 00:23:29.848 [2024-07-15 14:02:24.438900] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f210 is same with the state(5) to be set 00:23:29.848 [2024-07-15 14:02:24.438935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f210 (9): Bad file descriptor 00:23:29.848 [2024-07-15 14:02:24.438959] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:29.848 [2024-07-15 14:02:24.438973] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:29.848 [2024-07-15 14:02:24.438986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:29.848 [2024-07-15 14:02:24.439004] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
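The autotest_common.sh@912-916 lines interleaved above come from a bounded polling helper: the test passes a condition string such as '[[ "$(get_subsystem_names)" == "nvme0" ]]' and the helper re-evaluates it until it holds or the retry budget runs out. A minimal sketch reconstructed from the visible fragments (local cond/max, (( max-- )), eval "$cond"); the sleep between retries is an assumption, the real helper in autotest_common.sh may differ.

waitforcondition() {
    local cond=$1            # @912: the condition arrives as one quoted string
    local max=10             # @913: retry budget
    while (( max-- )); do    # @914
        if eval "$cond"; then    # @915: e.g. [[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]
            return 0             # @916
        fi
        sleep 1              # assumed pause between retries
    done
    return 1                 # condition never became true within the budget
}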
00:23:29.848 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.848 [2024-07-15 14:02:24.448751] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:29.848 [2024-07-15 14:02:24.448927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.848 [2024-07-15 14:02:24.448954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0f210 with addr=10.0.0.2, port=4420 00:23:29.848 [2024-07-15 14:02:24.448970] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0f210 is same with the state(5) to be set 00:23:29.848 [2024-07-15 14:02:24.449032] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:29.848 [2024-07-15 14:02:24.449058] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:29.848 [2024-07-15 14:02:24.449094] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0f210 (9): Bad file descriptor 00:23:29.848 [2024-07-15 14:02:24.449136] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:29.848 [2024-07-15 14:02:24.449154] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:29.848 [2024-07-15 14:02:24.449168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:29.848 [2024-07-15 14:02:24.449185] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
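The host/discovery.sh@55 and @63 pipelines around this point define the shape of the list helpers used in these waits: query the host-side RPC socket, extract names with jq, and flatten them onto one line so the [[ ... == ... ]] comparisons can match a fixed string. The sketch below is assembled from those fragments; rpc_cmd stands for the repo's RPC wrapper around scripts/rpc.py, and the exact function bodies in discovery.sh may differ.

get_bdev_list() {
    # discovery.sh@55: bdev names, sorted and space-joined (e.g. "nvme0n1 nvme0n2")
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}
get_subsystem_paths() {
    # discovery.sh@63: listening ports of the named controller (e.g. "4421")
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}
# After the listener move, get_subsystem_paths nvme0 prints 4421, which is what
# the discovery.sh@131 wait compares against $NVMF_SECOND_PORT.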
00:23:29.848 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:29.848 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:29.848 14:02:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:29.848 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:29.848 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:29.848 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:29.848 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:29.848 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:23:29.848 14:02:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:29.848 14:02:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:29.848 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.848 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.848 14:02:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:29.848 14:02:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:29.848 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.848 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:23:29.848 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:29.848 14:02:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:23:29.848 14:02:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:29.848 14:02:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:29.848 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:29.848 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:29.848 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:29.848 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:29.848 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:29.848 14:02:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:29.848 14:02:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:29.848 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.848 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.848 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.848 14:02:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:29.848 14:02:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:29.848 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:29.848 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:29.848 14:02:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:29.848 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.848 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.848 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.848 14:02:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:23:29.848 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:23:29.848 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:29.848 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:29.848 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:23:29.848 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:29.848 14:02:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:29.848 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.848 14:02:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:29.848 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.848 14:02:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:29.848 14:02:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:29.848 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.848 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:23:29.848 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:29.848 14:02:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:23:29.848 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:23:29.848 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:29.848 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:29.848 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:23:29.848 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:29.849 14:02:24 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:29.849 14:02:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:29.849 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.849 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.849 14:02:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:29.849 14:02:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:29.849 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.849 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:23:29.849 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:29.849 14:02:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:23:29.849 14:02:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:23:29.849 14:02:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:29.849 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:29.849 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:29.849 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:29.849 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:29.849 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:29.849 14:02:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:29.849 14:02:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:29.849 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.849 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.849 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.849 14:02:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:23:29.849 14:02:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:23:29.849 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:29.849 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:29.849 14:02:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:29.849 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.849 14:02:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:31.228 [2024-07-15 14:02:25.739341] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:31.228 [2024-07-15 14:02:25.739370] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:31.228 [2024-07-15 14:02:25.739391] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:31.228 [2024-07-15 14:02:25.866807] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:23:31.486 [2024-07-15 14:02:26.137511] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:31.486 [2024-07-15 14:02:26.137566] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:31.486 14:02:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.486 14:02:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:31.486 14:02:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:23:31.486 14:02:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:31.486 14:02:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:31.486 14:02:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:31.486 14:02:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:31.486 14:02:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:31.486 14:02:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:31.486 14:02:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.486 14:02:26 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:23:31.486 request: 00:23:31.486 { 00:23:31.486 "name": "nvme", 00:23:31.486 "trtype": "tcp", 00:23:31.486 "traddr": "10.0.0.2", 00:23:31.486 "adrfam": "ipv4", 00:23:31.486 "trsvcid": "8009", 00:23:31.486 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:31.486 "wait_for_attach": true, 00:23:31.486 "method": "bdev_nvme_start_discovery", 00:23:31.486 "req_id": 1 00:23:31.486 } 00:23:31.486 Got JSON-RPC error response 00:23:31.486 response: 00:23:31.486 { 00:23:31.486 "code": -17, 00:23:31.486 "message": "File exists" 00:23:31.486 } 00:23:31.486 14:02:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:31.486 14:02:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:23:31.486 14:02:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:31.486 14:02:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:31.486 14:02:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:31.486 14:02:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:23:31.486 14:02:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:31.486 14:02:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:31.486 14:02:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.486 14:02:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:31.486 14:02:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:31.486 14:02:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:31.486 14:02:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.486 14:02:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:31.486 14:02:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:23:31.486 14:02:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:31.486 14:02:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:31.486 14:02:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.486 14:02:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:31.486 14:02:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:31.486 14:02:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:31.486 14:02:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.486 14:02:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:31.486 14:02:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:31.486 14:02:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:23:31.486 14:02:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:31.486 14:02:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:23:31.486 14:02:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:31.486 14:02:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:31.486 14:02:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:31.486 14:02:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:31.486 14:02:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.486 14:02:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:31.486 request: 00:23:31.486 { 00:23:31.486 "name": "nvme_second", 00:23:31.486 "trtype": "tcp", 00:23:31.486 "traddr": "10.0.0.2", 00:23:31.486 "adrfam": "ipv4", 00:23:31.486 "trsvcid": "8009", 00:23:31.486 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:31.486 "wait_for_attach": true, 00:23:31.486 "method": "bdev_nvme_start_discovery", 00:23:31.486 "req_id": 1 00:23:31.486 } 00:23:31.486 Got JSON-RPC error response 00:23:31.486 response: 00:23:31.486 { 00:23:31.486 "code": -17, 00:23:31.486 "message": "File exists" 00:23:31.486 } 00:23:31.486 14:02:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:31.486 14:02:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:23:31.486 14:02:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:31.486 14:02:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:31.487 14:02:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:31.487 14:02:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:31.487 14:02:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:31.487 14:02:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:31.487 14:02:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.487 14:02:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:31.487 14:02:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:31.487 14:02:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:31.487 14:02:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.487 14:02:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:31.487 14:02:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:23:31.487 14:02:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:31.487 14:02:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.487 14:02:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:31.487 14:02:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:31.487 14:02:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:31.487 14:02:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:31.487 14:02:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.746 14:02:26 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:31.746 14:02:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:31.746 14:02:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:23:31.746 14:02:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:31.746 14:02:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:31.746 14:02:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:31.746 14:02:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:31.746 14:02:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:31.746 14:02:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:31.746 14:02:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.746 14:02:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:32.683 [2024-07-15 14:02:27.349145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:32.683 [2024-07-15 14:02:27.349222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff0b40 with addr=10.0.0.2, port=8010 00:23:32.683 [2024-07-15 14:02:27.349253] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:32.683 [2024-07-15 14:02:27.349267] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:32.683 [2024-07-15 14:02:27.349279] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:33.619 [2024-07-15 14:02:28.351573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:33.619 [2024-07-15 14:02:28.351656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff0b40 with addr=10.0.0.2, port=8010 00:23:33.619 [2024-07-15 14:02:28.351687] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:33.619 [2024-07-15 14:02:28.351700] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:33.619 [2024-07-15 14:02:28.351713] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:34.571 [2024-07-15 14:02:29.353636] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:23:34.571 request: 00:23:34.571 { 00:23:34.571 "name": "nvme_second", 00:23:34.571 "trtype": "tcp", 00:23:34.571 "traddr": "10.0.0.2", 00:23:34.571 "adrfam": "ipv4", 00:23:34.571 "trsvcid": "8010", 00:23:34.571 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:34.571 "wait_for_attach": false, 00:23:34.571 "attach_timeout_ms": 3000, 00:23:34.571 "method": "bdev_nvme_start_discovery", 00:23:34.571 "req_id": 1 00:23:34.571 } 00:23:34.571 Got JSON-RPC error response 00:23:34.571 response: 00:23:34.571 { 00:23:34.571 "code": -110, 
00:23:34.571 "message": "Connection timed out" 00:23:34.571 } 00:23:34.571 14:02:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:34.571 14:02:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:23:34.571 14:02:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:34.571 14:02:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:34.571 14:02:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:34.571 14:02:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:34.571 14:02:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:34.571 14:02:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:34.571 14:02:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.571 14:02:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:34.571 14:02:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:34.571 14:02:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:34.571 14:02:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.571 14:02:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:34.571 14:02:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:34.571 14:02:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3824552 00:23:34.571 14:02:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:23:34.571 14:02:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:34.571 14:02:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:23:34.571 14:02:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:34.571 14:02:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:23:34.571 14:02:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:34.571 14:02:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:34.571 rmmod nvme_tcp 00:23:34.829 rmmod nvme_fabrics 00:23:34.829 rmmod nvme_keyring 00:23:34.829 14:02:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:34.829 14:02:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:23:34.829 14:02:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:23:34.829 14:02:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 3824520 ']' 00:23:34.829 14:02:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 3824520 00:23:34.829 14:02:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 3824520 ']' 00:23:34.829 14:02:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 3824520 00:23:34.829 14:02:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:23:34.829 14:02:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:34.829 14:02:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3824520 00:23:34.829 14:02:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 
00:23:34.829 14:02:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:34.829 14:02:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3824520' 00:23:34.829 killing process with pid 3824520 00:23:34.829 14:02:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 3824520 00:23:34.829 14:02:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 3824520 00:23:35.087 14:02:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:35.087 14:02:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:35.087 14:02:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:35.087 14:02:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:35.087 14:02:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:35.087 14:02:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:35.087 14:02:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:35.087 14:02:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:36.990 14:02:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:36.990 00:23:36.990 real 0m13.618s 00:23:36.990 user 0m19.649s 00:23:36.990 sys 0m2.943s 00:23:36.990 14:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:36.990 14:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:36.990 ************************************ 00:23:36.990 END TEST nvmf_host_discovery 00:23:36.990 ************************************ 00:23:36.990 14:02:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:36.990 14:02:31 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:36.990 14:02:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:36.990 14:02:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:36.990 14:02:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:37.248 ************************************ 00:23:37.248 START TEST nvmf_host_multipath_status 00:23:37.248 ************************************ 00:23:37.248 14:02:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:37.248 * Looking for test storage... 
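The autotest_common.sh@948-972 fragments above trace the kill-and-wait helper that stops the target (pid 3824520): confirm the pid exists, resolve its command name (reactor_1 here), treat sudo specially, then kill and reap it. A sketch reconstructed from those fragments only; the sudo branch and non-Linux handling are not reconstructed, and the details shown are assumptions.

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                   # @948: a pid must be given
    kill -0 "$pid" || return 1                  # @952: process must still be alive
    if [ "$(uname)" = Linux ]; then             # @953
        process_name=$(ps --no-headers -o comm= "$pid")   # @954 -> reactor_1
    fi
    if [ "$process_name" = sudo ]; then         # @958
        :                                       # the sudo case takes a different path (not sketched)
    fi
    echo "killing process with pid $pid"        # @966
    kill "$pid"                                 # @967
    wait "$pid" || true                         # @972: reap the SPDK app
}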
00:23:37.248 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:37.248 14:02:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:37.248 14:02:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:23:37.248 14:02:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:37.248 14:02:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:37.248 14:02:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:37.248 14:02:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:37.248 14:02:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:37.248 14:02:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:37.248 14:02:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:37.248 14:02:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:37.248 14:02:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:37.248 14:02:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:37.248 14:02:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:37.248 14:02:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:23:37.248 14:02:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:37.248 14:02:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:37.248 14:02:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:37.248 14:02:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:37.248 14:02:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:37.248 14:02:31 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:37.248 14:02:31 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:37.248 14:02:31 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:37.248 14:02:31 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.248 14:02:31 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.248 14:02:31 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.248 14:02:31 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:23:37.248 14:02:31 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.248 14:02:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:23:37.248 14:02:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:37.248 14:02:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:37.248 14:02:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:37.248 14:02:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:37.248 14:02:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:37.248 14:02:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:37.248 14:02:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:37.248 14:02:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:37.248 14:02:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:37.248 14:02:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:37.249 14:02:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:37.249 14:02:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:23:37.249 14:02:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:37.249 14:02:31 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:37.249 14:02:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:23:37.249 14:02:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:37.249 14:02:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:37.249 14:02:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:37.249 14:02:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:37.249 14:02:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:37.249 14:02:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:37.249 14:02:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:37.249 14:02:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:37.249 14:02:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:37.249 14:02:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:37.249 14:02:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:23:37.249 14:02:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:39.149 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:39.149 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
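The nvmf/common.sh@289-341 lines above build a table of candidate NIC PCI IDs (the E810 0x159b among them), and the @382-401 lines that follow map each matching PCI function to the interface the kernel created under its sysfs node, which is how cvl_0_0 and cvl_0_1 are found under 0000:84:00.0 and 0000:84:00.1. A minimal standalone sketch of that sysfs lookup; filtering with lspci on 8086:159b is an illustrative assumption, the test derives its PCI list from its own bus cache.

for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
    # common.sh@383: the interfaces the kernel registered for this PCI function
    pci_net_devs=( "/sys/bus/pci/devices/$pci/net/"* )
    # common.sh@399: strip the sysfs path prefix, leaving names such as cvl_0_0
    pci_net_devs=( "${pci_net_devs[@]##*/}" )
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done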
00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:23:39.149 Found net devices under 0000:84:00.0: cvl_0_0 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:39.149 Found net devices under 0000:84:00.1: cvl_0_1 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:39.149 14:02:33 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:39.149 14:02:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:39.409 14:02:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:39.409 14:02:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:39.409 14:02:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:39.409 14:02:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:39.409 14:02:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:39.409 14:02:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:39.409 14:02:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:39.409 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:39.409 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:23:39.409 00:23:39.409 --- 10.0.0.2 ping statistics --- 00:23:39.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.409 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:23:39.409 14:02:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:39.409 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:39.409 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:23:39.409 00:23:39.409 --- 10.0.0.1 ping statistics --- 00:23:39.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.409 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:23:39.409 14:02:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:39.409 14:02:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:23:39.409 14:02:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:39.409 14:02:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:39.409 14:02:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:39.409 14:02:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:39.409 14:02:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:39.409 14:02:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:39.409 14:02:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:39.409 14:02:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:23:39.409 14:02:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:39.409 14:02:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:39.409 14:02:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:39.409 14:02:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=3827719 00:23:39.410 14:02:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:39.410 14:02:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 3827719 00:23:39.410 14:02:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 3827719 ']' 00:23:39.410 14:02:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:39.410 14:02:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:39.410 14:02:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:39.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:39.410 14:02:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:39.410 14:02:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:39.410 [2024-07-15 14:02:34.159279] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
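Before the target comes up, nvmf_tcp_init (traced above) isolates one of the two ports in a network namespace so that target and initiator exchange NVMe/TCP over a real link between the two E810 ports. Condensed, and using the interface names and 10.0.0.0/24 addresses this particular run reported (they would differ on another host), the sequence amounts to:

    # Target side lives in its own namespace; initiator side stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP in
    ping -c 1 10.0.0.2                                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator

Every target-side command from here on, including nvmf_tgt itself, is then prefixed with "ip netns exec cvl_0_0_ns_spdk".
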
00:23:39.410 [2024-07-15 14:02:34.159364] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:39.410 EAL: No free 2048 kB hugepages reported on node 1 00:23:39.410 [2024-07-15 14:02:34.223014] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:39.668 [2024-07-15 14:02:34.335614] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:39.668 [2024-07-15 14:02:34.335681] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:39.668 [2024-07-15 14:02:34.335694] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:39.668 [2024-07-15 14:02:34.335705] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:39.668 [2024-07-15 14:02:34.335714] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:39.668 [2024-07-15 14:02:34.335796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:39.668 [2024-07-15 14:02:34.335801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:39.668 14:02:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:39.668 14:02:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:23:39.668 14:02:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:39.668 14:02:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:39.668 14:02:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:39.668 14:02:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:39.668 14:02:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3827719 00:23:39.668 14:02:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:39.925 [2024-07-15 14:02:34.753833] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:40.182 14:02:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:40.440 Malloc0 00:23:40.440 14:02:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:40.698 14:02:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:40.956 14:02:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:40.956 [2024-07-15 14:02:35.771523] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:40.956 14:02:35 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:41.215 [2024-07-15 14:02:36.036229] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:41.475 14:02:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3828001 00:23:41.475 14:02:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:41.475 14:02:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:41.475 14:02:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3828001 /var/tmp/bdevperf.sock 00:23:41.476 14:02:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 3828001 ']' 00:23:41.476 14:02:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:41.476 14:02:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:41.476 14:02:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:41.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:41.476 14:02:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:41.476 14:02:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:41.734 14:02:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:41.734 14:02:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:23:41.734 14:02:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:41.991 14:02:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:23:42.249 Nvme0n1 00:23:42.249 14:02:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:42.818 Nvme0n1 00:23:42.818 14:02:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:23:42.818 14:02:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:44.744 14:02:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:23:44.744 14:02:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:45.001 14:02:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:45.261 14:02:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:23:46.636 14:02:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:23:46.636 14:02:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:46.636 14:02:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:46.636 14:02:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:46.636 14:02:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:46.636 14:02:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:46.636 14:02:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:46.636 14:02:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:46.894 14:02:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:46.894 14:02:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:46.894 14:02:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:46.894 14:02:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:47.152 14:02:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:47.152 14:02:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:47.152 14:02:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:47.152 14:02:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:47.409 14:02:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:47.409 14:02:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:47.409 14:02:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:47.409 14:02:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:47.667 14:02:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:47.667 14:02:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:47.667 14:02:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:47.667 14:02:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:48.232 14:02:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:48.232 14:02:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:23:48.232 14:02:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:48.232 14:02:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:48.490 14:02:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:23:49.866 14:02:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:23:49.866 14:02:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:49.866 14:02:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:49.866 14:02:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:49.866 14:02:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:49.866 14:02:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:49.866 14:02:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:49.866 14:02:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:50.124 14:02:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:50.124 14:02:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:50.124 14:02:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:50.124 14:02:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:50.382 14:02:45 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:50.382 14:02:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:50.382 14:02:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:50.382 14:02:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:50.640 14:02:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:50.640 14:02:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:50.640 14:02:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:50.640 14:02:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:50.897 14:02:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:50.897 14:02:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:50.897 14:02:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:50.897 14:02:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:51.466 14:02:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:51.466 14:02:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:23:51.466 14:02:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:51.466 14:02:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:51.725 14:02:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:23:53.104 14:02:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:23:53.104 14:02:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:53.104 14:02:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:53.104 14:02:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:53.104 14:02:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:53.104 14:02:47 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:53.104 14:02:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:53.104 14:02:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:53.361 14:02:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:53.361 14:02:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:53.361 14:02:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:53.361 14:02:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:53.618 14:02:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:53.618 14:02:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:53.618 14:02:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:53.618 14:02:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:53.876 14:02:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:53.876 14:02:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:53.876 14:02:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:53.876 14:02:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:54.134 14:02:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:54.134 14:02:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:54.134 14:02:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:54.134 14:02:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:54.701 14:02:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:54.702 14:02:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:23:54.702 14:02:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:54.702 14:02:49 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:54.959 14:02:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:23:56.340 14:02:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:23:56.340 14:02:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:56.340 14:02:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:56.340 14:02:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:56.340 14:02:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:56.340 14:02:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:56.340 14:02:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:56.340 14:02:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:56.597 14:02:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:56.597 14:02:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:56.597 14:02:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:56.597 14:02:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:56.854 14:02:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:56.854 14:02:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:56.854 14:02:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:56.854 14:02:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:57.112 14:02:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:57.112 14:02:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:57.112 14:02:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:57.112 14:02:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:57.370 14:02:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:23:57.370 14:02:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:57.370 14:02:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:57.370 14:02:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:57.939 14:02:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:57.939 14:02:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:23:57.939 14:02:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:57.939 14:02:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:58.198 14:02:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:23:59.571 14:02:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:23:59.571 14:02:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:59.571 14:02:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.571 14:02:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:59.571 14:02:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:59.571 14:02:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:59.571 14:02:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.571 14:02:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:59.827 14:02:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:59.827 14:02:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:59.827 14:02:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.827 14:02:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:00.084 14:02:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:00.084 14:02:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:24:00.084 14:02:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.084 14:02:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:00.343 14:02:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:00.343 14:02:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:00.343 14:02:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.343 14:02:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:00.600 14:02:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:00.600 14:02:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:00.600 14:02:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.600 14:02:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:00.858 14:02:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:00.858 14:02:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:24:00.858 14:02:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:01.116 14:02:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:01.373 14:02:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:24:02.310 14:02:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:24:02.310 14:02:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:02.310 14:02:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:02.310 14:02:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:02.568 14:02:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:02.568 14:02:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:02.568 14:02:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:02.568 14:02:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:02.825 14:02:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:02.825 14:02:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:02.825 14:02:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:02.825 14:02:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:03.083 14:02:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:03.083 14:02:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:03.083 14:02:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.083 14:02:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:03.339 14:02:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:03.339 14:02:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:03.339 14:02:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.339 14:02:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:03.595 14:02:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:03.595 14:02:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:03.595 14:02:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.595 14:02:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:03.852 14:02:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:03.852 14:02:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:24:04.109 14:02:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:24:04.109 14:02:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:24:04.674 14:02:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:04.674 14:02:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:24:06.050 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:24:06.050 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:06.050 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.050 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:06.050 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:06.050 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:06.050 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.050 14:03:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:06.307 14:03:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:06.307 14:03:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:06.307 14:03:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.307 14:03:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:06.564 14:03:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:06.564 14:03:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:06.564 14:03:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.564 14:03:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:06.822 14:03:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:06.822 14:03:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:06.822 14:03:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.822 14:03:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:07.079 14:03:01 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:07.079 14:03:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:07.338 14:03:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:07.338 14:03:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:07.628 14:03:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:07.628 14:03:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:24:07.628 14:03:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:07.910 14:03:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:08.167 14:03:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:24:09.100 14:03:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:24:09.100 14:03:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:09.100 14:03:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.100 14:03:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:09.357 14:03:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:09.357 14:03:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:09.357 14:03:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.357 14:03:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:09.614 14:03:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.614 14:03:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:09.614 14:03:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.614 14:03:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:09.871 14:03:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.871 14:03:04 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:09.871 14:03:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.871 14:03:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:10.129 14:03:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:10.129 14:03:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:10.129 14:03:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:10.129 14:03:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:10.387 14:03:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:10.387 14:03:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:10.387 14:03:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:10.387 14:03:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:10.686 14:03:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:10.686 14:03:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:24:10.686 14:03:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:10.943 14:03:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:11.508 14:03:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:24:12.440 14:03:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:24:12.440 14:03:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:12.440 14:03:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:12.440 14:03:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:12.697 14:03:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:12.697 14:03:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:12.697 14:03:07 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:12.697 14:03:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:12.954 14:03:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:12.955 14:03:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:12.955 14:03:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:12.955 14:03:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:13.213 14:03:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:13.213 14:03:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:13.213 14:03:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.213 14:03:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:13.470 14:03:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:13.470 14:03:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:13.470 14:03:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.470 14:03:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:13.727 14:03:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:13.727 14:03:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:13.727 14:03:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.727 14:03:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:13.985 14:03:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:13.985 14:03:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:13.985 14:03:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:14.550 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:14.807 14:03:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:24:15.742 14:03:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:24:15.742 14:03:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:15.742 14:03:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:15.742 14:03:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:15.999 14:03:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:15.999 14:03:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:15.999 14:03:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:15.999 14:03:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:16.256 14:03:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:16.256 14:03:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:16.256 14:03:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.256 14:03:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:16.513 14:03:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.513 14:03:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:16.513 14:03:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.513 14:03:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:16.770 14:03:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.770 14:03:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:16.770 14:03:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.770 14:03:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:17.027 14:03:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:17.027 14:03:11 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:17.027 14:03:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.027 14:03:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:17.284 14:03:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:17.284 14:03:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3828001 00:24:17.284 14:03:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 3828001 ']' 00:24:17.284 14:03:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 3828001 00:24:17.284 14:03:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:24:17.284 14:03:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:17.284 14:03:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3828001 00:24:17.542 14:03:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:17.542 14:03:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:17.542 14:03:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3828001' 00:24:17.542 killing process with pid 3828001 00:24:17.542 14:03:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 3828001 00:24:17.542 14:03:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 3828001 00:24:17.542 Connection closed with partial response: 00:24:17.542 00:24:17.542 00:24:17.811 14:03:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3828001 00:24:17.811 14:03:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:17.811 [2024-07-15 14:02:36.100672] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:24:17.811 [2024-07-15 14:02:36.100795] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3828001 ] 00:24:17.811 EAL: No free 2048 kB hugepages reported on node 1 00:24:17.811 [2024-07-15 14:02:36.160846] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.811 [2024-07-15 14:02:36.268115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:17.811 Running I/O for 90 seconds... 
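The trace above exercises the multipath-status helpers: each port_status call queries bdevperf's bdev_nvme_get_io_paths RPC over /var/tmp/bdevperf.sock and filters one field (current/connected/accessible) for a given listener port with jq, while set_ANA_state flips a listener's ANA state on the target side before the next check. A minimal sketch of that pattern, assuming the same workspace paths and bdevperf socket as this job (this is not the actual multipath_status.sh source, just the shape of the calls seen in the trace):

    # Hedged sketch of the check pattern visible in the trace above.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    port_status() {  # usage: port_status <trsvcid> <field> <expected>
        local got
        got=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths | \
              jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
        [[ "$got" == "$3" ]]
    }

    # e.g. make 10.0.0.2:4421 inaccessible on the target, give the host a
    # moment to react, then confirm the 4421 path is no longer usable.
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
         -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
    sleep 1
    port_status 4421 accessible false
    port_status 4421 current false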
00:24:17.811 [2024-07-15 14:02:52.736707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.811 [2024-07-15 14:02:52.736806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:17.811 [2024-07-15 14:02:52.736885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:29568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.811 [2024-07-15 14:02:52.736907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.811 [2024-07-15 14:02:52.736931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:29576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.811 [2024-07-15 14:02:52.736948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:17.811 [2024-07-15 14:02:52.736978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:29584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.811 [2024-07-15 14:02:52.736994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:17.811 [2024-07-15 14:02:52.737016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:29592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.811 [2024-07-15 14:02:52.737054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:17.811 [2024-07-15 14:02:52.737077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:29600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.811 [2024-07-15 14:02:52.737093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:17.811 [2024-07-15 14:02:52.737114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:29608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.811 [2024-07-15 14:02:52.737131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:17.811 [2024-07-15 14:02:52.737152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.811 [2024-07-15 14:02:52.737168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:17.811 [2024-07-15 14:02:52.737244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:29624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.811 [2024-07-15 14:02:52.737265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:17.811 [2024-07-15 14:02:52.737291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:29632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.811 [2024-07-15 14:02:52.737316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:4 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:17.811 [2024-07-15 14:02:52.737338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:29640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.811 [2024-07-15 14:02:52.737364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:17.811 [2024-07-15 14:02:52.737393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:29648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.811 [2024-07-15 14:02:52.737409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:17.811 [2024-07-15 14:02:52.737430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:29656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.811 [2024-07-15 14:02:52.737446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:17.811 [2024-07-15 14:02:52.737467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:29664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.811 [2024-07-15 14:02:52.737482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:17.811 [2024-07-15 14:02:52.737504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.811 [2024-07-15 14:02:52.737519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:17.811 [2024-07-15 14:02:52.737540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:29680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.811 [2024-07-15 14:02:52.737565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:17.811 [2024-07-15 14:02:52.737622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:29688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.811 [2024-07-15 14:02:52.737642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:17.811 [2024-07-15 14:02:52.737667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:29696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.811 [2024-07-15 14:02:52.737684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:17.811 [2024-07-15 14:02:52.737706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:29704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.811 [2024-07-15 14:02:52.737746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:17.811 [2024-07-15 14:02:52.737772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:29712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.811 [2024-07-15 14:02:52.737789] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:17.811 [2024-07-15 14:02:52.737812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:29720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.811 [2024-07-15 14:02:52.737828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:17.811 [2024-07-15 14:02:52.737851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.811 [2024-07-15 14:02:52.737866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:17.811 [2024-07-15 14:02:52.737889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:29736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.811 [2024-07-15 14:02:52.737905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:17.811 [2024-07-15 14:02:52.737933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:29744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.811 [2024-07-15 14:02:52.737950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:17.811 [2024-07-15 14:02:52.738090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:29752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.811 [2024-07-15 14:02:52.738111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:17.811 [2024-07-15 14:02:52.738138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:29760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.811 [2024-07-15 14:02:52.738165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:17.811 [2024-07-15 14:02:52.738188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:29768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.811 [2024-07-15 14:02:52.738204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:17.811 [2024-07-15 14:02:52.738226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:29776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.811 [2024-07-15 14:02:52.738242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:17.811 [2024-07-15 14:02:52.738264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:29784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.811 [2024-07-15 14:02:52.738280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:17.811 [2024-07-15 14:02:52.738302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:29792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:17.811 [2024-07-15 14:02:52.738318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:17.811 [2024-07-15 14:02:52.738341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:29800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.811 [2024-07-15 14:02:52.738357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:17.811 [2024-07-15 14:02:52.738380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:29808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.812 [2024-07-15 14:02:52.738396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:17.812 [2024-07-15 14:02:52.739442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:29816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.812 [2024-07-15 14:02:52.739479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:17.812 [2024-07-15 14:02:52.739507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:29824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.812 [2024-07-15 14:02:52.739526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.812 [2024-07-15 14:02:52.739551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:29832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.812 [2024-07-15 14:02:52.739567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:17.812 [2024-07-15 14:02:52.739596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.812 [2024-07-15 14:02:52.739613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:17.812 [2024-07-15 14:02:52.739645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:29848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.812 [2024-07-15 14:02:52.739662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:17.812 [2024-07-15 14:02:52.739685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:29856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.812 [2024-07-15 14:02:52.739703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:17.812 [2024-07-15 14:02:52.739727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:29864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.812 [2024-07-15 14:02:52.739769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:17.812 [2024-07-15 14:02:52.739798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:28864 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.812 [2024-07-15 14:02:52.739815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:17.812 [2024-07-15 14:02:52.739840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:28872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.812 [2024-07-15 14:02:52.739857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:17.812 [2024-07-15 14:02:52.739881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:28880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.812 [2024-07-15 14:02:52.739898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:17.812 [2024-07-15 14:02:52.739932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:28888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.812 [2024-07-15 14:02:52.739948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:17.812 [2024-07-15 14:02:52.739973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:28896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.812 [2024-07-15 14:02:52.739990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:17.812 [2024-07-15 14:02:52.740014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:28904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.812 [2024-07-15 14:02:52.740031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:17.812 [2024-07-15 14:02:52.740070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:28912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.812 [2024-07-15 14:02:52.740087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:17.812 [2024-07-15 14:02:52.740111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:28920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.812 [2024-07-15 14:02:52.740127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:17.812 [2024-07-15 14:02:52.740150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:28928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.812 [2024-07-15 14:02:52.740170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:17.812 [2024-07-15 14:02:52.740196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:28936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.812 [2024-07-15 14:02:52.740212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:17.812 [2024-07-15 14:02:52.740289] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:28944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.812 [2024-07-15 14:02:52.740309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:17.812 [2024-07-15 14:02:52.740338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:28952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.812 [2024-07-15 14:02:52.740355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:17.812 [2024-07-15 14:02:52.740380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:28960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.812 [2024-07-15 14:02:52.740396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:17.812 [2024-07-15 14:02:52.740421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:28968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.812 [2024-07-15 14:02:52.740437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:17.812 [2024-07-15 14:02:52.740462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.812 [2024-07-15 14:02:52.740478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:17.812 [2024-07-15 14:02:52.740502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:28984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.812 [2024-07-15 14:02:52.740518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:17.812 [2024-07-15 14:02:52.740543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:29872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.812 [2024-07-15 14:02:52.740559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:17.812 [2024-07-15 14:02:52.740584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:28992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.812 [2024-07-15 14:02:52.740600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:17.812 [2024-07-15 14:02:52.740624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:29000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.812 [2024-07-15 14:02:52.740640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:17.812 [2024-07-15 14:02:52.740665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:29008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.812 [2024-07-15 14:02:52.740681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007a p:0 m:0 
dnr:0 00:24:17.812 [2024-07-15 14:02:52.740705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:29016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.812 [2024-07-15 14:02:52.740725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:17.812 [2024-07-15 14:02:52.740779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:29024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.812 [2024-07-15 14:02:52.740798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:17.812 [2024-07-15 14:02:52.740824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:29032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.812 [2024-07-15 14:02:52.740840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:17.812 [2024-07-15 14:02:52.740866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:29040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.812 [2024-07-15 14:02:52.740882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:17.812 [2024-07-15 14:02:52.740908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:29048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.812 [2024-07-15 14:02:52.740924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:17.812 [2024-07-15 14:02:52.740950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:29056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.812 [2024-07-15 14:02:52.740966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.812 [2024-07-15 14:02:52.740992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.812 [2024-07-15 14:02:52.741008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.812 [2024-07-15 14:02:52.741034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:29072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.812 [2024-07-15 14:02:52.741065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:17.812 [2024-07-15 14:02:52.741091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:29080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.812 [2024-07-15 14:02:52.741107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:17.812 [2024-07-15 14:02:52.741133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:29088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.812 [2024-07-15 14:02:52.741149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:17.812 [2024-07-15 14:02:52.741174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:29096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.812 [2024-07-15 14:02:52.741190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:17.812 [2024-07-15 14:02:52.741214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:29104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.812 [2024-07-15 14:02:52.741231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:17.812 [2024-07-15 14:02:52.741255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:29112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.812 [2024-07-15 14:02:52.741275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:17.812 [2024-07-15 14:02:52.741301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:29120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.812 [2024-07-15 14:02:52.741317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:17.813 [2024-07-15 14:02:52.741343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:29128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.813 [2024-07-15 14:02:52.741360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:17.813 [2024-07-15 14:02:52.741385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:29136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.813 [2024-07-15 14:02:52.741401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:17.813 [2024-07-15 14:02:52.741426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:29144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.813 [2024-07-15 14:02:52.741443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:17.813 [2024-07-15 14:02:52.741468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:29152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.813 [2024-07-15 14:02:52.741484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:17.813 [2024-07-15 14:02:52.741509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:29160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.813 [2024-07-15 14:02:52.741524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:17.813 [2024-07-15 14:02:52.741549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.813 [2024-07-15 14:02:52.741565] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:17.813 [2024-07-15 14:02:52.741590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:29176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.813 [2024-07-15 14:02:52.741606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:17.813 [2024-07-15 14:02:52.741631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:29184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.813 [2024-07-15 14:02:52.741647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:17.813 [2024-07-15 14:02:52.741672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:29192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.813 [2024-07-15 14:02:52.741688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:17.813 [2024-07-15 14:02:52.741713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:29200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.813 [2024-07-15 14:02:52.741729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:17.813 [2024-07-15 14:02:52.741775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:29208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.813 [2024-07-15 14:02:52.741794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:17.813 [2024-07-15 14:02:52.741824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:29216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.813 [2024-07-15 14:02:52.741842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:17.813 [2024-07-15 14:02:52.741867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.813 [2024-07-15 14:02:52.741884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:17.813 [2024-07-15 14:02:52.741909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:29232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.813 [2024-07-15 14:02:52.741926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:17.813 [2024-07-15 14:02:52.741952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:29240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.813 [2024-07-15 14:02:52.741968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:17.813 [2024-07-15 14:02:52.741993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:29248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:17.813 [2024-07-15 14:02:52.742010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:17.813 [2024-07-15 14:02:52.742035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:29256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.813 [2024-07-15 14:02:52.742052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:17.813 [2024-07-15 14:02:52.742093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:29264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.813 [2024-07-15 14:02:52.742109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:17.813 [2024-07-15 14:02:52.742144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:29272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.813 [2024-07-15 14:02:52.742160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:17.813 [2024-07-15 14:02:52.742185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:29280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.813 [2024-07-15 14:02:52.742201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:17.813 [2024-07-15 14:02:52.742226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:29288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.813 [2024-07-15 14:02:52.742242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:17.813 [2024-07-15 14:02:52.742268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:29296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.813 [2024-07-15 14:02:52.742284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:17.813 [2024-07-15 14:02:52.742310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:29304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.813 [2024-07-15 14:02:52.742326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:17.813 [2024-07-15 14:02:52.742463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:29312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.813 [2024-07-15 14:02:52.742484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:17.813 [2024-07-15 14:02:52.742516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:29320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.813 [2024-07-15 14:02:52.742539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.813 [2024-07-15 14:02:52.742568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 
nsid:1 lba:29328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.813 [2024-07-15 14:02:52.742584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:17.813 [2024-07-15 14:02:52.742612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:29336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.813 [2024-07-15 14:02:52.742628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:17.813 [2024-07-15 14:02:52.742657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:29344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.813 [2024-07-15 14:02:52.742673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:17.813 [2024-07-15 14:02:52.742701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:29352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.813 [2024-07-15 14:02:52.742732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:17.813 [2024-07-15 14:02:52.742772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:29360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.813 [2024-07-15 14:02:52.742795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:17.813 [2024-07-15 14:02:52.742825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:29368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.813 [2024-07-15 14:02:52.742843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:17.813 [2024-07-15 14:02:52.742877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:29376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.813 [2024-07-15 14:02:52.742894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:17.813 [2024-07-15 14:02:52.742924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:29384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.813 [2024-07-15 14:02:52.742941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:17.813 [2024-07-15 14:02:52.742970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:29392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.813 [2024-07-15 14:02:52.742987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:17.813 [2024-07-15 14:02:52.743017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:29400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.813 [2024-07-15 14:02:52.743034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:17.813 [2024-07-15 14:02:52.743077] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:29408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.813 [2024-07-15 14:02:52.743098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:17.813 [2024-07-15 14:02:52.743128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:29416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.813 [2024-07-15 14:02:52.743144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:17.813 [2024-07-15 14:02:52.743173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:29424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.813 [2024-07-15 14:02:52.743189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:17.813 [2024-07-15 14:02:52.743218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:29432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.813 [2024-07-15 14:02:52.743234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:17.813 [2024-07-15 14:02:52.743262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:29440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.813 [2024-07-15 14:02:52.743279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:17.813 [2024-07-15 14:02:52.743307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:29448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.813 [2024-07-15 14:02:52.743323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:17.813 [2024-07-15 14:02:52.743351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:29456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.813 [2024-07-15 14:02:52.743368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:17.814 [2024-07-15 14:02:52.743396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.814 [2024-07-15 14:02:52.743413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:17.814 [2024-07-15 14:02:52.743441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:29472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.814 [2024-07-15 14:02:52.743457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:17.814 [2024-07-15 14:02:52.743486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:29480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.814 [2024-07-15 14:02:52.743503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0035 p:0 m:0 
dnr:0 00:24:17.814 [2024-07-15 14:02:52.743531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.814 [2024-07-15 14:02:52.743548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:17.814 [2024-07-15 14:02:52.743576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:29880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.814 [2024-07-15 14:02:52.743593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:17.814 [2024-07-15 14:02:52.743621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.814 [2024-07-15 14:02:52.743642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:17.814 [2024-07-15 14:02:52.743671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:29504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.814 [2024-07-15 14:02:52.743688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:17.814 [2024-07-15 14:02:52.743717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:29512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.814 [2024-07-15 14:02:52.743734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:17.814 [2024-07-15 14:02:52.743787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:29520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.814 [2024-07-15 14:02:52.743805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:17.814 [2024-07-15 14:02:52.743835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:29528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.814 [2024-07-15 14:02:52.743851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:17.814 [2024-07-15 14:02:52.743881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:29536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.814 [2024-07-15 14:02:52.743898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:17.814 [2024-07-15 14:02:52.743927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:29544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.814 [2024-07-15 14:02:52.743944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:17.814 [2024-07-15 14:02:52.743973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.814 [2024-07-15 14:02:52.743990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:17.814 [2024-07-15 14:03:09.398119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:16416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.814 [2024-07-15 14:03:09.398180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:17.814 [2024-07-15 14:03:09.398232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:16432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.814 [2024-07-15 14:03:09.398251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:17.814 [2024-07-15 14:03:09.398274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:16448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.814 [2024-07-15 14:03:09.398291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:17.814 [2024-07-15 14:03:09.398313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.814 [2024-07-15 14:03:09.398328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:17.814 [2024-07-15 14:03:09.398350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.814 [2024-07-15 14:03:09.398365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:17.814 [2024-07-15 14:03:09.398399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.814 [2024-07-15 14:03:09.398416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:17.814 [2024-07-15 14:03:09.398437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:16512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.814 [2024-07-15 14:03:09.398452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:17.814 [2024-07-15 14:03:09.398473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.814 [2024-07-15 14:03:09.398489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:17.814 [2024-07-15 14:03:09.398510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:16544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.814 [2024-07-15 14:03:09.398525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:17.814 [2024-07-15 14:03:09.398545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:16560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.814 [2024-07-15 14:03:09.398561] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:17.814 [2024-07-15 14:03:09.398582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:16184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.814 [2024-07-15 14:03:09.398598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:17.814 [2024-07-15 14:03:09.398619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:16216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.814 [2024-07-15 14:03:09.398636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.814 [2024-07-15 14:03:09.398657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:16192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.814 [2024-07-15 14:03:09.398672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:17.814 [2024-07-15 14:03:09.398694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:16224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.814 [2024-07-15 14:03:09.398709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:17.814 [2024-07-15 14:03:09.400619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:16576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.814 [2024-07-15 14:03:09.400644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:17.814 [2024-07-15 14:03:09.400686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.814 [2024-07-15 14:03:09.400704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:17.814 [2024-07-15 14:03:09.400750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:16608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.814 [2024-07-15 14:03:09.400769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:17.814 [2024-07-15 14:03:09.400801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.814 [2024-07-15 14:03:09.400819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:17.814 [2024-07-15 14:03:09.400841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:16640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.814 [2024-07-15 14:03:09.400857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:17.814 [2024-07-15 14:03:09.400879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:16656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:17.814 [2024-07-15 14:03:09.400896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:17.814 [2024-07-15 14:03:09.400918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:16672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.814 [2024-07-15 14:03:09.400934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:17.814 [2024-07-15 14:03:09.400956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:16688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.814 [2024-07-15 14:03:09.400972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:17.814 [2024-07-15 14:03:09.400994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:16704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.814 [2024-07-15 14:03:09.401009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:17.814 [2024-07-15 14:03:09.401050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:16720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.814 [2024-07-15 14:03:09.401065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:17.814 [2024-07-15 14:03:09.401086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:16736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.814 [2024-07-15 14:03:09.401102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:17.814 [2024-07-15 14:03:09.401123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:16232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.814 [2024-07-15 14:03:09.401138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:17.814 [2024-07-15 14:03:09.401159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.814 [2024-07-15 14:03:09.401174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:17.814 [2024-07-15 14:03:09.401195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:16760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.814 [2024-07-15 14:03:09.401211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:17.814 [2024-07-15 14:03:09.401232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.815 [2024-07-15 14:03:09.401247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:17.815 [2024-07-15 14:03:09.401268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 
lba:16792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.815 [2024-07-15 14:03:09.401288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:17.815 [2024-07-15 14:03:09.401311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:16808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.815 [2024-07-15 14:03:09.401327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:17.815 [2024-07-15 14:03:09.401348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.815 [2024-07-15 14:03:09.401364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:17.815 [2024-07-15 14:03:09.401384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:16840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.815 [2024-07-15 14:03:09.401400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:17.815 [2024-07-15 14:03:09.401421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:16856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.815 [2024-07-15 14:03:09.401437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:17.815 [2024-07-15 14:03:09.401458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:16872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.815 [2024-07-15 14:03:09.401473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:17.815 [2024-07-15 14:03:09.401495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.815 [2024-07-15 14:03:09.401511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:17.815 [2024-07-15 14:03:09.401532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:16904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.815 [2024-07-15 14:03:09.401548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:17.815 [2024-07-15 14:03:09.401569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.815 [2024-07-15 14:03:09.401585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:17.815 [2024-07-15 14:03:09.401606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:16936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.815 [2024-07-15 14:03:09.401622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:17.815 [2024-07-15 14:03:09.401643] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:16952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.815 [2024-07-15 14:03:09.401659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:17.815 [2024-07-15 14:03:09.401680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.815 [2024-07-15 14:03:09.401695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:17.815 [2024-07-15 14:03:09.401733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.815 [2024-07-15 14:03:09.401761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:17.815 [2024-07-15 14:03:09.401794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.815 [2024-07-15 14:03:09.401811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:17.815 [2024-07-15 14:03:09.401833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:17016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.815 [2024-07-15 14:03:09.401850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.815 [2024-07-15 14:03:09.401871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:17032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.815 [2024-07-15 14:03:09.401888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:17.815 [2024-07-15 14:03:09.401909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:17048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.815 [2024-07-15 14:03:09.401926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:17.815 [2024-07-15 14:03:09.401948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.815 [2024-07-15 14:03:09.401964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:17.815 [2024-07-15 14:03:09.401986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.815 [2024-07-15 14:03:09.402002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:17.815 [2024-07-15 14:03:09.402035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:16256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.815 [2024-07-15 14:03:09.402051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 
00:24:17.815 [2024-07-15 14:03:09.402514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.815 [2024-07-15 14:03:09.402536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:17.815 [2024-07-15 14:03:09.402562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.815 [2024-07-15 14:03:09.402580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:17.815 [2024-07-15 14:03:09.402602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:16352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.815 [2024-07-15 14:03:09.402617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:17.815 [2024-07-15 14:03:09.402638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:16384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.815 [2024-07-15 14:03:09.402654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:17.815 [2024-07-15 14:03:09.402674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:17088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.815 [2024-07-15 14:03:09.402690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:17.815 [2024-07-15 14:03:09.402716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:17104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.815 [2024-07-15 14:03:09.402760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:17.815 [2024-07-15 14:03:09.402784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.815 [2024-07-15 14:03:09.402801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:17.815 [2024-07-15 14:03:09.402822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:17136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.815 [2024-07-15 14:03:09.402839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:17.815 [2024-07-15 14:03:09.402861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:16296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.815 [2024-07-15 14:03:09.402877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:17.815 [2024-07-15 14:03:09.402905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:16328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.815 [2024-07-15 14:03:09.402921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:29 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:17.815 [2024-07-15 14:03:09.402943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:16360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.815 [2024-07-15 14:03:09.402959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:17.815 [2024-07-15 14:03:09.402980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:16392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.815 [2024-07-15 14:03:09.402996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:17.815 [2024-07-15 14:03:09.403017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:17144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.815 [2024-07-15 14:03:09.403055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:17.815 [2024-07-15 14:03:09.403077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.816 [2024-07-15 14:03:09.403096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:17.816 [2024-07-15 14:03:09.403116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.816 [2024-07-15 14:03:09.403131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:17.816 [2024-07-15 14:03:09.403153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:16424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.816 [2024-07-15 14:03:09.403169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:17.816 [2024-07-15 14:03:09.403191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:16456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.816 [2024-07-15 14:03:09.403206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:17.816 [2024-07-15 14:03:09.403232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:16488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.816 [2024-07-15 14:03:09.403249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:17.816 [2024-07-15 14:03:09.403270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.816 [2024-07-15 14:03:09.403285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:17.816 [2024-07-15 14:03:09.403306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:16552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.816 [2024-07-15 14:03:09.403322] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:17.816 [2024-07-15 14:03:09.403343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:16432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.816 [2024-07-15 14:03:09.403359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:17.816 [2024-07-15 14:03:09.403380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:16464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.816 [2024-07-15 14:03:09.403396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:17.816 [2024-07-15 14:03:09.403417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:16496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.816 [2024-07-15 14:03:09.403432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:17.816 [2024-07-15 14:03:09.403453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:16528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.816 [2024-07-15 14:03:09.403469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:17.816 [2024-07-15 14:03:09.403489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:16560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.816 [2024-07-15 14:03:09.403504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:17.816 [2024-07-15 14:03:09.403525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:16216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.816 [2024-07-15 14:03:09.403540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:17.816 [2024-07-15 14:03:09.403562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:16224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.816 [2024-07-15 14:03:09.403577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.816 [2024-07-15 14:03:09.405090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:17200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.816 [2024-07-15 14:03:09.405113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:17.816 [2024-07-15 14:03:09.405140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:17216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.816 [2024-07-15 14:03:09.405158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:17.816 [2024-07-15 14:03:09.405179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:17.816 [2024-07-15 14:03:09.405200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:17.816 [2024-07-15 14:03:09.405223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:16616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.816 [2024-07-15 14:03:09.405240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:17.816 [2024-07-15 14:03:09.405261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:16648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.816 [2024-07-15 14:03:09.405276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:17.816 [2024-07-15 14:03:09.405297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:16680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.816 [2024-07-15 14:03:09.405312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:17.816 [2024-07-15 14:03:09.405334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:16712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.816 [2024-07-15 14:03:09.405349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:17.816 [2024-07-15 14:03:09.405370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:16744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.816 [2024-07-15 14:03:09.405386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:17.816 [2024-07-15 14:03:09.405406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:16768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.816 [2024-07-15 14:03:09.405422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:17.816 [2024-07-15 14:03:09.405443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:16800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.816 [2024-07-15 14:03:09.405458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:17.816 [2024-07-15 14:03:09.405479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:16832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.816 [2024-07-15 14:03:09.405494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:17.816 [2024-07-15 14:03:09.405515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:16864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.816 [2024-07-15 14:03:09.405531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:17.816 [2024-07-15 14:03:09.405551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 
nsid:1 lba:16896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.816 [2024-07-15 14:03:09.405567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:17.816 [2024-07-15 14:03:09.405587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:17224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.816 [2024-07-15 14:03:09.405602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:17.816 [2024-07-15 14:03:09.405623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.816 [2024-07-15 14:03:09.405643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:17.816 [2024-07-15 14:03:09.405665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:16976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.816 [2024-07-15 14:03:09.405680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:17.816 [2024-07-15 14:03:09.405701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:17008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.816 [2024-07-15 14:03:09.405732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:17.816 [2024-07-15 14:03:09.405764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:17040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.816 [2024-07-15 14:03:09.405781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:17.816 [2024-07-15 14:03:09.405803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.816 [2024-07-15 14:03:09.405819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:17.816 [2024-07-15 14:03:09.405841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:16592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.816 [2024-07-15 14:03:09.405856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:17.816 [2024-07-15 14:03:09.405878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.816 [2024-07-15 14:03:09.405894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:17.816 [2024-07-15 14:03:09.405915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.816 [2024-07-15 14:03:09.405931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:17.816 [2024-07-15 14:03:09.405953] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:16688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.816 [2024-07-15 14:03:09.405968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:17.816 [2024-07-15 14:03:09.405989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.816 [2024-07-15 14:03:09.406004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:17.816 [2024-07-15 14:03:09.406026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:16232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.816 [2024-07-15 14:03:09.406057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:17.816 [2024-07-15 14:03:09.406079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.816 [2024-07-15 14:03:09.406094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:17.816 [2024-07-15 14:03:09.406115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.816 [2024-07-15 14:03:09.406134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:17.816 [2024-07-15 14:03:09.406156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.817 [2024-07-15 14:03:09.406172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:17.817 [2024-07-15 14:03:09.406193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:16856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.817 [2024-07-15 14:03:09.406208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:17.817 [2024-07-15 14:03:09.406229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:16888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.817 [2024-07-15 14:03:09.406245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:17.817 [2024-07-15 14:03:09.406266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.817 [2024-07-15 14:03:09.406281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.817 [2024-07-15 14:03:09.406302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:16952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.817 [2024-07-15 14:03:09.406318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:24:17.817 [2024-07-15 14:03:09.406339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.817 [2024-07-15 14:03:09.406354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:17.817 [2024-07-15 14:03:09.406375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:17016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.817 [2024-07-15 14:03:09.406391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:17.817 [2024-07-15 14:03:09.406412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:17048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.817 [2024-07-15 14:03:09.406427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:17.817 [2024-07-15 14:03:09.406448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:17080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.817 [2024-07-15 14:03:09.406463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:17.817 [2024-07-15 14:03:09.406484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:17240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.817 [2024-07-15 14:03:09.406500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:17.817 [2024-07-15 14:03:09.406521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:17112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.817 [2024-07-15 14:03:09.406537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:17.817 [2024-07-15 14:03:09.406558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:16288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.817 [2024-07-15 14:03:09.406573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:17.817 [2024-07-15 14:03:09.406598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:16352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.817 [2024-07-15 14:03:09.406615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:17.817 [2024-07-15 14:03:09.406636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:17088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.817 [2024-07-15 14:03:09.406651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:17.817 [2024-07-15 14:03:09.406672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:17120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.817 [2024-07-15 14:03:09.406688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:27 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:17.817 [2024-07-15 14:03:09.406709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:16296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.817 [2024-07-15 14:03:09.406746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:17.817 [2024-07-15 14:03:09.406771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:16360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.817 [2024-07-15 14:03:09.406788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:17.817 [2024-07-15 14:03:09.406810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:17144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.817 [2024-07-15 14:03:09.406826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:17.817 [2024-07-15 14:03:09.406848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:17176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.817 [2024-07-15 14:03:09.406864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:17.817 [2024-07-15 14:03:09.406886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:16456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.817 [2024-07-15 14:03:09.406902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:17.817 [2024-07-15 14:03:09.406923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:16520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.817 [2024-07-15 14:03:09.406939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:17.817 [2024-07-15 14:03:09.406960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.817 [2024-07-15 14:03:09.406976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:17.817 [2024-07-15 14:03:09.406998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:16496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.817 [2024-07-15 14:03:09.407014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:17.817 [2024-07-15 14:03:09.407664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:16560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.817 [2024-07-15 14:03:09.407686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:17.817 [2024-07-15 14:03:09.407732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:16224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.817 [2024-07-15 14:03:09.407760] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:17.817 [2024-07-15 14:03:09.409874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:17168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.817 [2024-07-15 14:03:09.409900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:17.817 [2024-07-15 14:03:09.409927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:16416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.817 [2024-07-15 14:03:09.409946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:17.817 [2024-07-15 14:03:09.409968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:16480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.817 [2024-07-15 14:03:09.409984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:17.817 [2024-07-15 14:03:09.410006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:16544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.817 [2024-07-15 14:03:09.410045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:17.817 [2024-07-15 14:03:09.410067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.817 [2024-07-15 14:03:09.410082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:17.817 [2024-07-15 14:03:09.410103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:17272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.817 [2024-07-15 14:03:09.410118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:17.817 [2024-07-15 14:03:09.410139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:17288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.817 [2024-07-15 14:03:09.410154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:17.817 [2024-07-15 14:03:09.410175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:17304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.817 [2024-07-15 14:03:09.410191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:17.817 [2024-07-15 14:03:09.410212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:17320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.817 [2024-07-15 14:03:09.410227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:17.817 [2024-07-15 14:03:09.410248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:17.817 [2024-07-15 14:03:09.410263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:17.817 [2024-07-15 14:03:09.410284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.817 [2024-07-15 14:03:09.410300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:17.817 [2024-07-15 14:03:09.410320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:17192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.817 [2024-07-15 14:03:09.410340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.817 [2024-07-15 14:03:09.410363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.817 [2024-07-15 14:03:09.410379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:17.817 [2024-07-15 14:03:09.410400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.817 [2024-07-15 14:03:09.410415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:17.817 [2024-07-15 14:03:09.410435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:16680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.817 [2024-07-15 14:03:09.410451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:17.817 [2024-07-15 14:03:09.410471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:16744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.817 [2024-07-15 14:03:09.410487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:17.817 [2024-07-15 14:03:09.410507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:16800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.817 [2024-07-15 14:03:09.410522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:17.818 [2024-07-15 14:03:09.410543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.818 [2024-07-15 14:03:09.410558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:17.818 [2024-07-15 14:03:09.410579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:17224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.818 [2024-07-15 14:03:09.410594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:17.818 [2024-07-15 14:03:09.410615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 
lba:16976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.818 [2024-07-15 14:03:09.410630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:17.818 [2024-07-15 14:03:09.410651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:17040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.818 [2024-07-15 14:03:09.410667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:17.818 [2024-07-15 14:03:09.410687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:16592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.818 [2024-07-15 14:03:09.410702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:17.818 [2024-07-15 14:03:09.410755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:16656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.818 [2024-07-15 14:03:09.410775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:17.818 [2024-07-15 14:03:09.410798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:16720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.818 [2024-07-15 14:03:09.410818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:17.818 [2024-07-15 14:03:09.410841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:16760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.818 [2024-07-15 14:03:09.410858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:17.818 [2024-07-15 14:03:09.410880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.818 [2024-07-15 14:03:09.410896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:17.818 [2024-07-15 14:03:09.410917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:16888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.818 [2024-07-15 14:03:09.410933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:17.818 [2024-07-15 14:03:09.410954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:16952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.818 [2024-07-15 14:03:09.410969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:17.818 [2024-07-15 14:03:09.410991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:17016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.818 [2024-07-15 14:03:09.411007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:17.818 [2024-07-15 14:03:09.411028] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.818 [2024-07-15 14:03:09.411060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:17.818 [2024-07-15 14:03:09.411081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:17112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.818 [2024-07-15 14:03:09.411097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:17.818 [2024-07-15 14:03:09.411135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:16352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.818 [2024-07-15 14:03:09.411151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:17.818 [2024-07-15 14:03:09.411172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:17120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.818 [2024-07-15 14:03:09.411188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:17.818 [2024-07-15 14:03:09.411209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:16360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.818 [2024-07-15 14:03:09.411225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:17.818 [2024-07-15 14:03:09.411246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:17176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.818 [2024-07-15 14:03:09.411262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:17.818 [2024-07-15 14:03:09.411284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:16520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.818 [2024-07-15 14:03:09.411301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:17.818 [2024-07-15 14:03:09.411327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.818 [2024-07-15 14:03:09.411344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:17.818 [2024-07-15 14:03:09.411366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:16224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.818 [2024-07-15 14:03:09.411383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:17.818 [2024-07-15 14:03:09.412174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:16608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.818 [2024-07-15 14:03:09.412198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003c p:0 m:0 
dnr:0 00:24:17.818 [2024-07-15 14:03:09.412233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:16672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.818 [2024-07-15 14:03:09.412250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:17.818 [2024-07-15 14:03:09.412271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:16736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.818 [2024-07-15 14:03:09.412290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:17.818 [2024-07-15 14:03:09.412311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:16808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.818 [2024-07-15 14:03:09.412327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:17.818 [2024-07-15 14:03:09.412357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.818 [2024-07-15 14:03:09.412373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:17.818 [2024-07-15 14:03:09.412394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.818 [2024-07-15 14:03:09.412409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.818 [2024-07-15 14:03:09.412430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:17368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.818 [2024-07-15 14:03:09.412446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:17.818 [2024-07-15 14:03:09.412466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.818 [2024-07-15 14:03:09.412481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:17.818 [2024-07-15 14:03:09.412502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:17400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.818 [2024-07-15 14:03:09.412518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:17.818 [2024-07-15 14:03:09.412539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:17416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.818 [2024-07-15 14:03:09.412555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:17.818 [2024-07-15 14:03:09.412581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:17432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.818 [2024-07-15 14:03:09.412598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:17.818 [2024-07-15 14:03:09.412619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:17448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.818 [2024-07-15 14:03:09.412635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:17.818 [2024-07-15 14:03:09.412656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:17464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.818 [2024-07-15 14:03:09.412671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:17.818 [2024-07-15 14:03:09.412692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.818 [2024-07-15 14:03:09.412708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:17.818 [2024-07-15 14:03:09.412755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:17496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.818 [2024-07-15 14:03:09.412774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:17.818 [2024-07-15 14:03:09.412796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:17032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.818 [2024-07-15 14:03:09.412813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:17.818 [2024-07-15 14:03:09.412834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:17232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.818 [2024-07-15 14:03:09.412851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:17.818 [2024-07-15 14:03:09.412872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:17136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.818 [2024-07-15 14:03:09.412888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:17.818 [2024-07-15 14:03:09.413625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:16464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.818 [2024-07-15 14:03:09.413648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:17.818 [2024-07-15 14:03:09.413674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:17504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.818 [2024-07-15 14:03:09.413691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:17.818 [2024-07-15 14:03:09.413713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:17520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.818 [2024-07-15 14:03:09.413752] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:17.819 [2024-07-15 14:03:09.413777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:17536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.819 [2024-07-15 14:03:09.413794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:17.819 [2024-07-15 14:03:09.413815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:17552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.819 [2024-07-15 14:03:09.413838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:17.819 [2024-07-15 14:03:09.413861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.819 [2024-07-15 14:03:09.413878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:17.819 [2024-07-15 14:03:09.413899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:17584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.819 [2024-07-15 14:03:09.413915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:17.819 [2024-07-15 14:03:09.413936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:16416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.819 [2024-07-15 14:03:09.413951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:17.819 [2024-07-15 14:03:09.413973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:16544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.819 [2024-07-15 14:03:09.413989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:17.819 [2024-07-15 14:03:09.414010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.819 [2024-07-15 14:03:09.414041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:17.819 [2024-07-15 14:03:09.414062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:17304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.819 [2024-07-15 14:03:09.414077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:17.819 [2024-07-15 14:03:09.414113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:17336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.819 [2024-07-15 14:03:09.414128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:17.819 [2024-07-15 14:03:09.414148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:17192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:17.819 [2024-07-15 14:03:09.414163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:17.819 [2024-07-15 14:03:09.414182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:16616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.819 [2024-07-15 14:03:09.414197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:17.819 [2024-07-15 14:03:09.414217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:16744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.819 [2024-07-15 14:03:09.414232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:17.819 [2024-07-15 14:03:09.414252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.819 [2024-07-15 14:03:09.414267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:17.819 [2024-07-15 14:03:09.414287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:16976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.819 [2024-07-15 14:03:09.414306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:17.819 [2024-07-15 14:03:09.414327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:16592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.819 [2024-07-15 14:03:09.414342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:17.819 [2024-07-15 14:03:09.414362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:16720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.819 [2024-07-15 14:03:09.414378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:17.819 [2024-07-15 14:03:09.414398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.819 [2024-07-15 14:03:09.414412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.819 [2024-07-15 14:03:09.414432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:16952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.819 [2024-07-15 14:03:09.414447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:17.819 [2024-07-15 14:03:09.414467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.819 [2024-07-15 14:03:09.414482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:17.819 [2024-07-15 14:03:09.414502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 
lba:16352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.819 [2024-07-15 14:03:09.414517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:17.819 [2024-07-15 14:03:09.414537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:16360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.819 [2024-07-15 14:03:09.414551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:17.819 [2024-07-15 14:03:09.414571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:16520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.819 [2024-07-15 14:03:09.414586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:17.819 [2024-07-15 14:03:09.414607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:16224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.819 [2024-07-15 14:03:09.414622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:17.819 [2024-07-15 14:03:09.415632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:17264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.819 [2024-07-15 14:03:09.415656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:17.819 [2024-07-15 14:03:09.415681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:17296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.819 [2024-07-15 14:03:09.415698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:17.819 [2024-07-15 14:03:09.415734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:17328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.819 [2024-07-15 14:03:09.415760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:17.819 [2024-07-15 14:03:09.415790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:17360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.819 [2024-07-15 14:03:09.415807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:17.819 [2024-07-15 14:03:09.415829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:16672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.819 [2024-07-15 14:03:09.415845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:17.819 [2024-07-15 14:03:09.415866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:16808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.819 [2024-07-15 14:03:09.415882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:17.819 [2024-07-15 14:03:09.415904] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:16936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.819 [2024-07-15 14:03:09.415920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:17.819 [2024-07-15 14:03:09.415942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:17384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.819 [2024-07-15 14:03:09.415958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:17.819 [2024-07-15 14:03:09.415979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.819 [2024-07-15 14:03:09.415995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:17.819 [2024-07-15 14:03:09.416033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:17448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.819 [2024-07-15 14:03:09.416049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:17.819 [2024-07-15 14:03:09.416070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:17480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.819 [2024-07-15 14:03:09.416101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:17.819 [2024-07-15 14:03:09.416122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:17032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.819 [2024-07-15 14:03:09.416137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:17.819 [2024-07-15 14:03:09.416158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:17136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.819 [2024-07-15 14:03:09.416173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:17.819 [2024-07-15 14:03:09.417559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:16688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.819 [2024-07-15 14:03:09.417582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:17.820 [2024-07-15 14:03:09.417622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.820 [2024-07-15 14:03:09.417639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:17.820 [2024-07-15 14:03:09.417664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:16984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.820 [2024-07-15 14:03:09.417680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 
00:24:17.820 [2024-07-15 14:03:09.417701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.820 [2024-07-15 14:03:09.417716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:17.820 [2024-07-15 14:03:09.417765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.820 [2024-07-15 14:03:09.417783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:17.820 [2024-07-15 14:03:09.417805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:17240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.820 [2024-07-15 14:03:09.417821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:17.820 [2024-07-15 14:03:09.417842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:17144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.820 [2024-07-15 14:03:09.417858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:17.820 [2024-07-15 14:03:09.417880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:16560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.820 [2024-07-15 14:03:09.417895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:17.820 [2024-07-15 14:03:09.417917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:17504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.820 [2024-07-15 14:03:09.417933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:17.820 [2024-07-15 14:03:09.417954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:17536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.820 [2024-07-15 14:03:09.417969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:17.820 [2024-07-15 14:03:09.417991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:17568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.820 [2024-07-15 14:03:09.418006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:17.820 [2024-07-15 14:03:09.418045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:16416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.820 [2024-07-15 14:03:09.418061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.820 [2024-07-15 14:03:09.418082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.820 [2024-07-15 14:03:09.418112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.820 [2024-07-15 14:03:09.418132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:17336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.820 [2024-07-15 14:03:09.418147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:17.820 [2024-07-15 14:03:09.418167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:16616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.820 [2024-07-15 14:03:09.418186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:17.820 [2024-07-15 14:03:09.418207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:16864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.820 [2024-07-15 14:03:09.418223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:17.820 [2024-07-15 14:03:09.418242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:16592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.820 [2024-07-15 14:03:09.418257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:17.820 [2024-07-15 14:03:09.418277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:16824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.820 [2024-07-15 14:03:09.418292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:17.820 [2024-07-15 14:03:09.418312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.820 [2024-07-15 14:03:09.418326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:17.820 [2024-07-15 14:03:09.418347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:16360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.820 [2024-07-15 14:03:09.418361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:17.820 [2024-07-15 14:03:09.418382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:16224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.820 [2024-07-15 14:03:09.418396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:17.820 [2024-07-15 14:03:09.418416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:17392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.820 [2024-07-15 14:03:09.418431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:17.820 [2024-07-15 14:03:09.418451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:17640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.820 [2024-07-15 14:03:09.418466] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:17.820 [2024-07-15 14:03:09.418486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:17656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.820 [2024-07-15 14:03:09.418500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:17.820 [2024-07-15 14:03:09.418520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.820 [2024-07-15 14:03:09.418535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:17.820 [2024-07-15 14:03:09.418555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.820 [2024-07-15 14:03:09.418570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:17.820 [2024-07-15 14:03:09.418589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:17704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.820 [2024-07-15 14:03:09.418608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:17.820 [2024-07-15 14:03:09.418629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:17408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.820 [2024-07-15 14:03:09.418644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:17.820 [2024-07-15 14:03:09.418665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.820 [2024-07-15 14:03:09.418680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:17.820 [2024-07-15 14:03:09.418700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:17472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.820 [2024-07-15 14:03:09.418730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:17.820 [2024-07-15 14:03:09.418764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:17296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.820 [2024-07-15 14:03:09.418781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:17.820 [2024-07-15 14:03:09.418803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:17360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.820 [2024-07-15 14:03:09.418818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:17.820 [2024-07-15 14:03:09.418839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:17.821 [2024-07-15 14:03:09.418855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:17.821 [2024-07-15 14:03:09.418876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:17384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.821 [2024-07-15 14:03:09.418892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:17.821 [2024-07-15 14:03:09.418914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.821 [2024-07-15 14:03:09.418930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:17.821 [2024-07-15 14:03:09.418952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:17032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.821 [2024-07-15 14:03:09.418969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:17.821 [2024-07-15 14:03:09.421210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:17720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.821 [2024-07-15 14:03:09.421234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:17.821 [2024-07-15 14:03:09.421274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:17736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.821 [2024-07-15 14:03:09.421292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:17.821 [2024-07-15 14:03:09.421312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:17752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.821 [2024-07-15 14:03:09.421327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:17.821 [2024-07-15 14:03:09.421353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:17768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.821 [2024-07-15 14:03:09.421379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:17.821 [2024-07-15 14:03:09.421400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:17784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.821 [2024-07-15 14:03:09.421415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:17.821 [2024-07-15 14:03:09.421435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:17800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.821 [2024-07-15 14:03:09.421450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:17.821 [2024-07-15 14:03:09.421471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 
lba:17816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.821 [2024-07-15 14:03:09.421485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:17.821 [2024-07-15 14:03:09.421505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:17528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.821 [2024-07-15 14:03:09.421520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:17.821 [2024-07-15 14:03:09.421540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:17560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.821 [2024-07-15 14:03:09.421555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.821 [2024-07-15 14:03:09.421575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:17592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.821 [2024-07-15 14:03:09.421590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:17.821 [2024-07-15 14:03:09.421610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:17288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.821 [2024-07-15 14:03:09.421625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:17.821 [2024-07-15 14:03:09.421645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:17352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.821 [2024-07-15 14:03:09.421659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:17.821 [2024-07-15 14:03:09.421683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:17224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.821 [2024-07-15 14:03:09.421698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:17.821 [2024-07-15 14:03:09.421734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.821 [2024-07-15 14:03:09.421761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:17.821 [2024-07-15 14:03:09.421785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.821 [2024-07-15 14:03:09.421810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:17.821 [2024-07-15 14:03:09.421837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:17840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.821 [2024-07-15 14:03:09.421854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:17.821 [2024-07-15 14:03:09.421875] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.821 [2024-07-15 14:03:09.421891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:17.821 [2024-07-15 14:03:09.421912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.821 [2024-07-15 14:03:09.421928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:17.821 [2024-07-15 14:03:09.421950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:16856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.821 [2024-07-15 14:03:09.421966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:17.821 [2024-07-15 14:03:09.421988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.821 [2024-07-15 14:03:09.422004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:17.821 [2024-07-15 14:03:09.422040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:17240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.821 [2024-07-15 14:03:09.422056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:17.821 [2024-07-15 14:03:09.422077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.821 [2024-07-15 14:03:09.422092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:17.821 [2024-07-15 14:03:09.422128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:17536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.821 [2024-07-15 14:03:09.422148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:17.821 [2024-07-15 14:03:09.422168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:16416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.821 [2024-07-15 14:03:09.422184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:17.821 [2024-07-15 14:03:09.422204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:17336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.821 [2024-07-15 14:03:09.422219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:17.821 [2024-07-15 14:03:09.422239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.821 [2024-07-15 14:03:09.422254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 
00:24:17.821 [2024-07-15 14:03:09.422274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:16824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.821 [2024-07-15 14:03:09.422289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:17.821 [2024-07-15 14:03:09.422309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:16360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.821 [2024-07-15 14:03:09.422327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:17.821 [2024-07-15 14:03:09.422350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:17392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.821 [2024-07-15 14:03:09.422365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:17.821 [2024-07-15 14:03:09.423345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.821 [2024-07-15 14:03:09.423367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:17.821 [2024-07-15 14:03:09.423392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:17688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.821 [2024-07-15 14:03:09.423408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:17.821 [2024-07-15 14:03:09.423429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:17408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.821 [2024-07-15 14:03:09.423446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:17.821 [2024-07-15 14:03:09.423467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:17472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.821 [2024-07-15 14:03:09.423482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:17.821 [2024-07-15 14:03:09.423502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:17360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.821 [2024-07-15 14:03:09.423517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:17.821 [2024-07-15 14:03:09.423537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:17384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.821 [2024-07-15 14:03:09.423552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:17.821 [2024-07-15 14:03:09.423572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.821 [2024-07-15 14:03:09.423592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:17.821 [2024-07-15 14:03:09.423612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:17176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.821 [2024-07-15 14:03:09.423627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:17.821 [2024-07-15 14:03:09.423647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:17368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.821 [2024-07-15 14:03:09.423662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:17.822 [2024-07-15 14:03:09.423682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:17432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.822 [2024-07-15 14:03:09.423697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:17.822 [2024-07-15 14:03:09.423717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:17496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.822 [2024-07-15 14:03:09.423761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:17.822 [2024-07-15 14:03:09.423788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:17888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.822 [2024-07-15 14:03:09.423805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.822 [2024-07-15 14:03:09.423827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.822 [2024-07-15 14:03:09.423843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:17.822 [2024-07-15 14:03:09.423865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:17920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.822 [2024-07-15 14:03:09.423881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:17.822 [2024-07-15 14:03:09.423902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:17936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.822 [2024-07-15 14:03:09.423918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:17.822 [2024-07-15 14:03:09.423940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:17952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.822 [2024-07-15 14:03:09.423965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:17.822 [2024-07-15 14:03:09.423987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:17616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.822 [2024-07-15 14:03:09.424002] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:17.822 [2024-07-15 14:03:09.424403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:17552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.822 [2024-07-15 14:03:09.424425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:17.822 [2024-07-15 14:03:09.424450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:17304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.822 [2024-07-15 14:03:09.424466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:17.822 [2024-07-15 14:03:09.424487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:16952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.822 [2024-07-15 14:03:09.424503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:17.822 [2024-07-15 14:03:09.424523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.822 [2024-07-15 14:03:09.424538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:17.822 [2024-07-15 14:03:09.424558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.822 [2024-07-15 14:03:09.424573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:17.822 [2024-07-15 14:03:09.424594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:18000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.822 [2024-07-15 14:03:09.424615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:17.822 [2024-07-15 14:03:09.424641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.822 [2024-07-15 14:03:09.424657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:17.822 [2024-07-15 14:03:09.424688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:18032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.822 [2024-07-15 14:03:09.424703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:17.822 [2024-07-15 14:03:09.424747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:18048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.822 [2024-07-15 14:03:09.424765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:17.822 [2024-07-15 14:03:09.424787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:17632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:17.822 [2024-07-15 14:03:09.424803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:17.822 [2024-07-15 14:03:09.424826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:17664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.822 [2024-07-15 14:03:09.424842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:17.822 [2024-07-15 14:03:09.424864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:17696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.822 [2024-07-15 14:03:09.424879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:17.822 [2024-07-15 14:03:09.424901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.822 [2024-07-15 14:03:09.424917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:17.822 [2024-07-15 14:03:09.424939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:17768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.822 [2024-07-15 14:03:09.424955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:17.822 [2024-07-15 14:03:09.424976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:17800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.822 [2024-07-15 14:03:09.424992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:17.822 [2024-07-15 14:03:09.425019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.822 [2024-07-15 14:03:09.425051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:17.822 [2024-07-15 14:03:09.425073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:17592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.822 [2024-07-15 14:03:09.425102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:17.822 [2024-07-15 14:03:09.425124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:17352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.822 [2024-07-15 14:03:09.425139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:17.822 [2024-07-15 14:03:09.425164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:16760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.822 [2024-07-15 14:03:09.425180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:17.822 [2024-07-15 14:03:09.425200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 
lba:17840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.822 [2024-07-15 14:03:09.425216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:17.822 [2024-07-15 14:03:09.425236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:17872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.822 [2024-07-15 14:03:09.425251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:17.822 [2024-07-15 14:03:09.425271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.822 [2024-07-15 14:03:09.425286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:17.822 [2024-07-15 14:03:09.425307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:16560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.822 [2024-07-15 14:03:09.425322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:17.822 [2024-07-15 14:03:09.425342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.822 [2024-07-15 14:03:09.425357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:17.822 [2024-07-15 14:03:09.425378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.822 [2024-07-15 14:03:09.425392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:17.822 [2024-07-15 14:03:09.425413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:16360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.822 [2024-07-15 14:03:09.425428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:17.822 [2024-07-15 14:03:09.426014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:17416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.822 [2024-07-15 14:03:09.426052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.822 [2024-07-15 14:03:09.426078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.822 [2024-07-15 14:03:09.426095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:17.822 [2024-07-15 14:03:09.426124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:18080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.822 [2024-07-15 14:03:09.426139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:17.822 [2024-07-15 14:03:09.426160] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:18096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.822 [2024-07-15 14:03:09.426174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:17.822 [2024-07-15 14:03:09.426195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:18112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.822 [2024-07-15 14:03:09.426214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:17.822 [2024-07-15 14:03:09.426236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:18128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.822 [2024-07-15 14:03:09.426251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:17.822 [2024-07-15 14:03:09.426271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:17744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.822 [2024-07-15 14:03:09.426286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:17.822 [2024-07-15 14:03:09.426307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:17776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.822 [2024-07-15 14:03:09.426322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:17.823 [2024-07-15 14:03:09.426342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:17808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.823 [2024-07-15 14:03:09.426357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:17.823 [2024-07-15 14:03:09.426377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:17688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.823 [2024-07-15 14:03:09.426392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:17.823 [2024-07-15 14:03:09.426412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:17472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.823 [2024-07-15 14:03:09.426427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:17.823 [2024-07-15 14:03:09.426446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:17384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.823 [2024-07-15 14:03:09.426461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:17.823 [2024-07-15 14:03:09.426481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:17176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.823 [2024-07-15 14:03:09.426495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006d p:0 m:0 dnr:0 
00:24:17.823 [2024-07-15 14:03:09.426515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:17432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.823 [2024-07-15 14:03:09.426530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:17.823 [2024-07-15 14:03:09.426550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:17888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.823 [2024-07-15 14:03:09.426565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:17.823 [2024-07-15 14:03:09.426585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:17920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.823 [2024-07-15 14:03:09.426600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:17.823 [2024-07-15 14:03:09.426621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:17952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.823 [2024-07-15 14:03:09.426649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:17.823 [2024-07-15 14:03:09.428002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:17832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.823 [2024-07-15 14:03:09.428025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:17.823 [2024-07-15 14:03:09.428066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:17864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.823 [2024-07-15 14:03:09.428083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:17.823 [2024-07-15 14:03:09.428104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:17304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.823 [2024-07-15 14:03:09.428119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:17.823 [2024-07-15 14:03:09.428142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.823 [2024-07-15 14:03:09.428157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:17.823 [2024-07-15 14:03:09.428177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.823 [2024-07-15 14:03:09.428199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:17.823 [2024-07-15 14:03:09.428219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:18032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.823 [2024-07-15 14:03:09.428234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:79 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:17.823 [2024-07-15 14:03:09.428254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.823 [2024-07-15 14:03:09.428268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:17.823 [2024-07-15 14:03:09.428289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:17696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.823 [2024-07-15 14:03:09.428304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:17.823 [2024-07-15 14:03:09.428324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:17768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.823 [2024-07-15 14:03:09.428339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:17.823 [2024-07-15 14:03:09.428359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:17528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.823 [2024-07-15 14:03:09.428374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:17.823 [2024-07-15 14:03:09.428394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:17352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.823 [2024-07-15 14:03:09.428408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:17.823 [2024-07-15 14:03:09.428428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:17840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.823 [2024-07-15 14:03:09.428443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:17.823 [2024-07-15 14:03:09.428469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.823 [2024-07-15 14:03:09.428485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:17.823 [2024-07-15 14:03:09.428505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:16416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.823 [2024-07-15 14:03:09.428520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:17.823 [2024-07-15 14:03:09.428540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:16360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.823 [2024-07-15 14:03:09.428556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.823 [2024-07-15 14:03:09.428585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:17504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.823 [2024-07-15 14:03:09.428600] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.823 [2024-07-15 14:03:09.428621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:17272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.823 [2024-07-15 14:03:09.428635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:17.823 [2024-07-15 14:03:09.428656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:17080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.823 [2024-07-15 14:03:09.428671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:17.823 [2024-07-15 14:03:09.428691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:17672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.823 [2024-07-15 14:03:09.428706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:17.823 [2024-07-15 14:03:09.428749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:18064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.823 [2024-07-15 14:03:09.428768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:17.823 [2024-07-15 14:03:09.428790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:18096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.823 [2024-07-15 14:03:09.428807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:17.823 [2024-07-15 14:03:09.428828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.823 [2024-07-15 14:03:09.428844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:17.823 [2024-07-15 14:03:09.428865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:17776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.823 [2024-07-15 14:03:09.428881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:17.823 [2024-07-15 14:03:09.428903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:17688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.823 [2024-07-15 14:03:09.428919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:17.823 [2024-07-15 14:03:09.428947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:17384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.823 [2024-07-15 14:03:09.428965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:17.823 [2024-07-15 14:03:09.428986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:17432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:17.823 [2024-07-15 14:03:09.429002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:17.823 [2024-07-15 14:03:09.429044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:17920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.823 [2024-07-15 14:03:09.429060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:17.824 [2024-07-15 14:03:09.431312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:17448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.824 [2024-07-15 14:03:09.431334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:17.824 [2024-07-15 14:03:09.431359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:17896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.824 [2024-07-15 14:03:09.431376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:17.824 [2024-07-15 14:03:09.431397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:17928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.824 [2024-07-15 14:03:09.431412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:17.824 [2024-07-15 14:03:09.431433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:18144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.824 [2024-07-15 14:03:09.431448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:17.824 [2024-07-15 14:03:09.431475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:18160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.824 [2024-07-15 14:03:09.431491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:17.824 [2024-07-15 14:03:09.431512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:18176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.824 [2024-07-15 14:03:09.431527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:17.824 [2024-07-15 14:03:09.431547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:18192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.824 [2024-07-15 14:03:09.431562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:17.824 [2024-07-15 14:03:09.431582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.824 [2024-07-15 14:03:09.431596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:17.824 [2024-07-15 14:03:09.431617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 
nsid:1 lba:18224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.824 [2024-07-15 14:03:09.431631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:17.824 [2024-07-15 14:03:09.431651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:18240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.824 [2024-07-15 14:03:09.431670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:17.824 [2024-07-15 14:03:09.431692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:18248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.824 [2024-07-15 14:03:09.431707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:17.824 [2024-07-15 14:03:09.431750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.824 [2024-07-15 14:03:09.431769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:17.824 [2024-07-15 14:03:09.431793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:18280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.824 [2024-07-15 14:03:09.431810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:17.824 [2024-07-15 14:03:09.431831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:18296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.824 [2024-07-15 14:03:09.431857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:17.824 [2024-07-15 14:03:09.431879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:17864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.824 [2024-07-15 14:03:09.431895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:17.824 [2024-07-15 14:03:09.431916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:17968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.824 [2024-07-15 14:03:09.431932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:17.824 [2024-07-15 14:03:09.431954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:18032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.824 [2024-07-15 14:03:09.431969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:17.824 [2024-07-15 14:03:09.431991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:17696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.824 [2024-07-15 14:03:09.432007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:17.824 [2024-07-15 14:03:09.432051] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:17528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.824 [2024-07-15 14:03:09.432066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:17.824 [2024-07-15 14:03:09.432106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:17840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.824 [2024-07-15 14:03:09.432121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:17.824 [2024-07-15 14:03:09.432147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:16416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.824 [2024-07-15 14:03:09.432163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.824 [2024-07-15 14:03:09.432184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:17504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.824 [2024-07-15 14:03:09.432203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:17.824 [2024-07-15 14:03:09.432225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:17080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.824 [2024-07-15 14:03:09.432240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:17.824 [2024-07-15 14:03:09.432260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:18064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.824 [2024-07-15 14:03:09.432280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:17.824 [2024-07-15 14:03:09.433061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.824 [2024-07-15 14:03:09.433083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:17.824 [2024-07-15 14:03:09.433111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.824 [2024-07-15 14:03:09.433128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:17.824 [2024-07-15 14:03:09.433148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:17432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.824 [2024-07-15 14:03:09.433164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:17.824 [2024-07-15 14:03:09.433184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:17960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.824 [2024-07-15 14:03:09.433199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0028 p:0 m:0 
dnr:0 00:24:17.824 [2024-07-15 14:03:09.433220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.824 [2024-07-15 14:03:09.433235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:17.824 [2024-07-15 14:03:09.433266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:18024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.824 [2024-07-15 14:03:09.433281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:17.824 [2024-07-15 14:03:09.433302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:18056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.824 [2024-07-15 14:03:09.433317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:17.824 [2024-07-15 14:03:09.433337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:17752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.824 [2024-07-15 14:03:09.433352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:17.824 [2024-07-15 14:03:09.433372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:17816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.824 [2024-07-15 14:03:09.433387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:17.824 [2024-07-15 14:03:09.433407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:17856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.824 [2024-07-15 14:03:09.433427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:17.824 [2024-07-15 14:03:09.433449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.824 [2024-07-15 14:03:09.433464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:17.824 [2024-07-15 14:03:09.433484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:18320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.824 [2024-07-15 14:03:09.433499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:17.824 [2024-07-15 14:03:09.433525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.824 [2024-07-15 14:03:09.433540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:17.824 [2024-07-15 14:03:09.433561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.824 [2024-07-15 14:03:09.433576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:10 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:17.824 [2024-07-15 14:03:09.433595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:18072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.824 [2024-07-15 14:03:09.433610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:17.824 [2024-07-15 14:03:09.433631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:18104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.824 [2024-07-15 14:03:09.433646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:17.824 [2024-07-15 14:03:09.433666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:17656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.824 [2024-07-15 14:03:09.433681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:17.825 [2024-07-15 14:03:09.434643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.825 [2024-07-15 14:03:09.434665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:17.825 [2024-07-15 14:03:09.434690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:18368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.825 [2024-07-15 14:03:09.434707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:17.825 [2024-07-15 14:03:09.434752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.825 [2024-07-15 14:03:09.434771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:17.825 [2024-07-15 14:03:09.434794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:18400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.825 [2024-07-15 14:03:09.434811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:17.825 [2024-07-15 14:03:09.434832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.825 [2024-07-15 14:03:09.434848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:17.825 [2024-07-15 14:03:09.434875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:18432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.825 [2024-07-15 14:03:09.434893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:17.825 [2024-07-15 14:03:09.891663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:18448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.825 [2024-07-15 14:03:09.891705] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:17.825 [2024-07-15 14:03:09.891775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:18464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.825 [2024-07-15 14:03:09.891798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:17.825 [2024-07-15 14:03:09.891823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:18480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.825 [2024-07-15 14:03:09.891840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:17.825 [2024-07-15 14:03:09.891863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:18016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.825 [2024-07-15 14:03:09.891879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:17.825 [2024-07-15 14:03:09.891903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:17736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.825 [2024-07-15 14:03:09.891920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:17.825 [2024-07-15 14:03:09.891943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:17896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.825 [2024-07-15 14:03:09.891960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.825 [2024-07-15 14:03:09.891983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:18144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.825 [2024-07-15 14:03:09.891999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:17.825 [2024-07-15 14:03:09.892027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:18176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.825 [2024-07-15 14:03:09.892059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:17.825 [2024-07-15 14:03:09.892082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:18208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.825 [2024-07-15 14:03:09.892098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:17.825 [2024-07-15 14:03:09.892119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.825 [2024-07-15 14:03:09.892135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:17.825 [2024-07-15 14:03:09.892157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:18264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:17.825 [2024-07-15 14:03:09.892173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:17.825 [2024-07-15 14:03:09.892217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:18296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.825 [2024-07-15 14:03:09.892235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:17.825 [2024-07-15 14:03:09.892258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:17968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.825 [2024-07-15 14:03:09.892274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:17.825 [2024-07-15 14:03:09.892297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:17696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.825 [2024-07-15 14:03:09.892313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:17.825 [2024-07-15 14:03:09.892335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:17840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.825 [2024-07-15 14:03:09.892351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:17.825 [2024-07-15 14:03:09.892374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.825 [2024-07-15 14:03:09.892390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:17.825 [2024-07-15 14:03:09.892412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:18064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.825 [2024-07-15 14:03:09.892428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:17.825 [2024-07-15 14:03:09.892450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.825 [2024-07-15 14:03:09.892466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:17.825 [2024-07-15 14:03:09.892488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:18504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.825 [2024-07-15 14:03:09.892504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:17.825 [2024-07-15 14:03:09.892526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:18520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.825 [2024-07-15 14:03:09.892542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:17.825 [2024-07-15 14:03:09.892581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 
lba:18536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.825 [2024-07-15 14:03:09.892596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:17.825 [2024-07-15 14:03:09.892618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:17688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.825 [2024-07-15 14:03:09.892634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:17.825 [2024-07-15 14:03:09.892655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:17960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.825 [2024-07-15 14:03:09.892671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:17.825 [2024-07-15 14:03:09.892693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:18024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.825 [2024-07-15 14:03:09.892713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:17.825 [2024-07-15 14:03:09.892773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.825 [2024-07-15 14:03:09.892791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:17.825 [2024-07-15 14:03:09.892814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.825 [2024-07-15 14:03:09.892830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:17.825 [2024-07-15 14:03:09.892853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:18320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.825 [2024-07-15 14:03:09.892870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:17.825 [2024-07-15 14:03:09.892891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.825 [2024-07-15 14:03:09.892908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:17.825 [2024-07-15 14:03:09.892931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:18104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.825 [2024-07-15 14:03:09.892947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:17.825 [2024-07-15 14:03:09.895520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:18080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.825 [2024-07-15 14:03:09.895546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:17.825 [2024-07-15 14:03:09.895593] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:17888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.825 [2024-07-15 14:03:09.895620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:17.825 [2024-07-15 14:03:09.895642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:18552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.825 [2024-07-15 14:03:09.895670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:17.825 [2024-07-15 14:03:09.895691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:18568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.825 [2024-07-15 14:03:09.895707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:17.825 [2024-07-15 14:03:09.895747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.825 [2024-07-15 14:03:09.895781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:17.825 [2024-07-15 14:03:09.895804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:18600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.825 [2024-07-15 14:03:09.895820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:17.825 [2024-07-15 14:03:09.895843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:18616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.825 [2024-07-15 14:03:09.895865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:17.825 [2024-07-15 14:03:09.895888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:18136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.826 [2024-07-15 14:03:09.895904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:17.826 [2024-07-15 14:03:09.895926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:18168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.826 [2024-07-15 14:03:09.895943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:17.826 [2024-07-15 14:03:09.895965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.826 [2024-07-15 14:03:09.895990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:17.826 [2024-07-15 14:03:09.896028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:18232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.826 [2024-07-15 14:03:09.896054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 
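Note on the completions above: every one of them carries status (03/02), i.e. Status Code Type 0x3 (Path Related Status), Status Code 0x02 (Asymmetric Access Inaccessible) — the I/O landed on a path whose ANA group the target currently reports as inaccessible, so the multipath initiator has to retry it on the other path. The state changes are driven from the target side through SPDK's rpc.py; a minimal sketch of such a flip is shown here, assuming the nvmf_subsystem_listener_set_ana_state RPC with the flag spellings of this SPDK revision and the 10.0.0.2:4420 listener used by these tests (addresses/flags are illustrative, not copied from this run):

  # mark the cnode1 listener inaccessible, then bring it back to optimized
  rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
  rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420 -n optimized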
00:24:17.826 Received shutdown signal, test time was about 34.450664 seconds 00:24:17.826 00:24:17.826 Latency(us) 00:24:17.826 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:17.826 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:17.826 Verification LBA range: start 0x0 length 0x4000 00:24:17.826 Nvme0n1 : 34.45 8258.70 32.26 0.00 0.00 15473.87 421.74 4026531.84 00:24:17.826 =================================================================================================================== 00:24:17.826 Total : 8258.70 32.26 0.00 0.00 15473.87 421.74 4026531.84 00:24:17.826 14:03:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:18.083 14:03:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:24:18.083 14:03:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:18.083 14:03:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:24:18.083 14:03:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:18.083 14:03:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:24:18.083 14:03:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:18.083 14:03:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:24:18.083 14:03:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:18.083 14:03:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:18.083 rmmod nvme_tcp 00:24:18.083 rmmod nvme_fabrics 00:24:18.083 rmmod nvme_keyring 00:24:18.083 14:03:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:18.083 14:03:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:24:18.083 14:03:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:24:18.083 14:03:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 3827719 ']' 00:24:18.083 14:03:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 3827719 00:24:18.083 14:03:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 3827719 ']' 00:24:18.083 14:03:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 3827719 00:24:18.083 14:03:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:24:18.083 14:03:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:18.083 14:03:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3827719 00:24:18.083 14:03:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:18.083 14:03:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:18.083 14:03:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3827719' 00:24:18.083 killing process with pid 3827719 00:24:18.083 14:03:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 3827719 00:24:18.083 
14:03:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 3827719 00:24:18.340 14:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:18.340 14:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:18.340 14:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:18.340 14:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:18.340 14:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:18.340 14:03:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:18.340 14:03:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:18.340 14:03:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:20.241 14:03:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:20.241 00:24:20.241 real 0m43.240s 00:24:20.241 user 2m10.289s 00:24:20.241 sys 0m11.887s 00:24:20.241 14:03:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:20.241 14:03:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:20.241 ************************************ 00:24:20.241 END TEST nvmf_host_multipath_status 00:24:20.241 ************************************ 00:24:20.499 14:03:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:20.499 14:03:15 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:20.499 14:03:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:20.499 14:03:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:20.499 14:03:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:20.499 ************************************ 00:24:20.499 START TEST nvmf_discovery_remove_ifc 00:24:20.499 ************************************ 00:24:20.499 14:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:20.499 * Looking for test storage... 
00:24:20.499 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:20.499 14:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:20.499 14:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:24:20.499 14:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:20.499 14:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:20.499 14:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:20.499 14:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:20.499 14:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:20.499 14:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:20.499 14:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:20.499 14:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:20.499 14:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:20.499 14:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:20.499 14:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:20.499 14:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:24:20.499 14:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:20.499 14:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:20.499 14:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:20.499 14:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:20.499 14:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:20.499 14:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:20.499 14:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:20.499 14:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:20.499 14:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.499 14:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.499 14:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.499 14:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:24:20.500 14:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.500 14:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:24:20.500 14:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:20.500 14:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:20.500 14:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:20.500 14:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:20.500 14:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:20.500 14:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:20.500 14:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:20.500 14:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:20.500 14:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:24:20.500 14:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:24:20.500 14:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:24:20.500 14:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:24:20.500 14:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:24:20.500 14:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:24:20.500 14:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:24:20.500 14:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:20.500 14:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:20.500 14:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:20.500 14:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:20.500 14:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:20.500 14:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:20.500 14:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:20.500 14:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:20.500 14:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:20.500 14:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:20.500 14:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:24:20.500 14:03:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:23.030 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:23.030 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:24:23.030 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:23.030 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:23.030 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:23.030 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:23.030 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:23.030 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:24:23.030 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:23.030 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:24:23.030 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:24:23.030 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:24:23.030 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:24:23.030 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:24:23.030 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:24:23.030 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:23.030 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:23.030 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:23.030 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:23.030 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:23.030 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:23.030 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:23.030 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:23.030 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:23.030 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:23.030 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:23.030 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:23.030 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:23.030 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:23.030 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:23.030 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:23.030 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:23.030 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:23.030 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:24:23.030 Found 0000:84:00.0 (0x8086 - 0x159b) 00:24:23.030 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:23.030 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:23.030 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:23.030 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:23.030 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:23.030 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:23.030 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:24:23.030 Found 0000:84:00.1 (0x8086 - 0x159b) 00:24:23.030 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:23.030 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:23.030 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:23.030 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:23.030 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:23.030 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:23.030 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:23.030 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:23.030 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:23.031 14:03:17 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:24:23.031 Found net devices under 0000:84:00.0: cvl_0_0 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:24:23.031 Found net devices under 0000:84:00.1: cvl_0_1 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:23.031 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:23.031 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:24:23.031 00:24:23.031 --- 10.0.0.2 ping statistics --- 00:24:23.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:23.031 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:23.031 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:23.031 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:24:23.031 00:24:23.031 --- 10.0.0.1 ping statistics --- 00:24:23.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:23.031 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=3834979 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 3834979 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 3834979 ']' 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:23.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:23.031 [2024-07-15 14:03:17.518679] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
00:24:23.031 [2024-07-15 14:03:17.518776] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:23.031 EAL: No free 2048 kB hugepages reported on node 1 00:24:23.031 [2024-07-15 14:03:17.583989] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:23.031 [2024-07-15 14:03:17.692794] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:23.031 [2024-07-15 14:03:17.692868] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:23.031 [2024-07-15 14:03:17.692884] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:23.031 [2024-07-15 14:03:17.692895] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:23.031 [2024-07-15 14:03:17.692906] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:23.031 [2024-07-15 14:03:17.692940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.031 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:23.031 [2024-07-15 14:03:17.829522] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:23.031 [2024-07-15 14:03:17.837672] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:23.031 null0 00:24:23.031 [2024-07-15 14:03:17.869663] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:23.290 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.290 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3835120 00:24:23.290 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:24:23.290 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3835120 /tmp/host.sock 00:24:23.290 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 3835120 ']' 00:24:23.290 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:24:23.290 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:24:23.290 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:23.291 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:23.291 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:23.291 14:03:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:23.291 [2024-07-15 14:03:17.931621] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:24:23.291 [2024-07-15 14:03:17.931694] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3835120 ] 00:24:23.291 EAL: No free 2048 kB hugepages reported on node 1 00:24:23.291 [2024-07-15 14:03:17.988202] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:23.291 [2024-07-15 14:03:18.092863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:23.549 14:03:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:23.549 14:03:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:24:23.549 14:03:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:23.549 14:03:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:24:23.549 14:03:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.549 14:03:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:23.549 14:03:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.549 14:03:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:24:23.549 14:03:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.549 14:03:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:23.549 14:03:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.549 14:03:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:24:23.549 14:03:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.549 14:03:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:24.484 [2024-07-15 14:03:19.297470] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:24.484 [2024-07-15 14:03:19.297511] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:24.484 [2024-07-15 14:03:19.297532] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:24.741 [2024-07-15 14:03:19.424946] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:24.741 [2024-07-15 14:03:19.488249] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:24.741 [2024-07-15 14:03:19.488309] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:24.741 [2024-07-15 14:03:19.488346] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:24.741 [2024-07-15 14:03:19.488367] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:24.741 [2024-07-15 14:03:19.488399] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:24.741 14:03:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.741 14:03:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:24:24.741 14:03:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:24.741 14:03:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:24.741 14:03:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:24.741 14:03:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.741 14:03:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:24.741 14:03:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:24.741 14:03:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:24.741 [2024-07-15 14:03:19.495514] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x230be00 was disconnected and freed. delete nvme_qpair. 
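The attach sequence above was driven by the bdev_nvme_start_discovery call issued earlier over /tmp/host.sock. Restated as a stand-alone command (a sketch only; it assumes scripts/rpc.py from the checked-out SPDK tree, which is what the rpc_cmd helper wraps), the short loss/reconnect timeouts are what make the later interface removal surface as a controller failure within a couple of seconds:
  # Attach via discovery with short reconnect/loss timeouts so a lost listener
  # is detected quickly; the values match the ones traced above.
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach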
00:24:24.741 14:03:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.741 14:03:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:24:24.741 14:03:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:24:24.741 14:03:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:24:25.000 14:03:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:24:25.000 14:03:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:25.000 14:03:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:25.000 14:03:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:25.000 14:03:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.000 14:03:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:25.000 14:03:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:25.000 14:03:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:25.000 14:03:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.000 14:03:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:25.000 14:03:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:25.938 14:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:25.938 14:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:25.938 14:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:25.938 14:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.938 14:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:25.938 14:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:25.938 14:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:25.938 14:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.938 14:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:25.938 14:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:26.873 14:03:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:26.873 14:03:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:26.873 14:03:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:26.873 14:03:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.873 14:03:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:26.873 14:03:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # 
sort 00:24:26.873 14:03:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:26.873 14:03:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.132 14:03:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:27.132 14:03:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:28.070 14:03:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:28.070 14:03:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:28.070 14:03:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:28.070 14:03:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.070 14:03:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:28.070 14:03:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:28.070 14:03:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:28.070 14:03:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.070 14:03:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:28.070 14:03:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:29.008 14:03:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:29.008 14:03:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:29.008 14:03:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:29.008 14:03:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.008 14:03:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:29.008 14:03:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:29.008 14:03:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:29.008 14:03:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.008 14:03:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:29.008 14:03:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:30.384 14:03:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:30.384 14:03:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:30.384 14:03:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.384 14:03:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:30.384 14:03:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:30.384 14:03:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:30.384 14:03:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:30.384 14:03:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
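The repeated get_bdev_list / sleep 1 records above are the wait_for_bdev polling loop from discovery_remove_ifc.sh. A minimal sketch of the pattern (not the verbatim helper; the RPC pipeline is the same one traced above) looks like:
  # Re-read the bdev list once per second until it equals the expected value;
  # an empty expected value means "wait until the bdev has gone away".
  wait_for_bdev_sketch() {
      local expected=$1 current
      while :; do
          current=$(scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)
          [[ $current == "$expected" ]] && break
          sleep 1
      done
  }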
00:24:30.384 14:03:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:30.384 14:03:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:30.384 [2024-07-15 14:03:24.929551] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:24:30.384 [2024-07-15 14:03:24.929616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:30.384 [2024-07-15 14:03:24.929635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.385 [2024-07-15 14:03:24.929650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:30.385 [2024-07-15 14:03:24.929662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.385 [2024-07-15 14:03:24.929675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:30.385 [2024-07-15 14:03:24.929688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.385 [2024-07-15 14:03:24.929701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:30.385 [2024-07-15 14:03:24.929735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.385 [2024-07-15 14:03:24.929757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:30.385 [2024-07-15 14:03:24.929770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.385 [2024-07-15 14:03:24.929782] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2870 is same with the state(5) to be set 00:24:30.385 [2024-07-15 14:03:24.939571] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2870 (9): Bad file descriptor 00:24:30.385 [2024-07-15 14:03:24.949613] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:31.369 14:03:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:31.369 14:03:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:31.369 14:03:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:31.369 14:03:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.369 14:03:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:31.369 14:03:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:31.369 14:03:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:31.369 [2024-07-15 14:03:25.959766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:31.369 [2024-07-15 
14:03:25.959815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d2870 with addr=10.0.0.2, port=4420 00:24:31.369 [2024-07-15 14:03:25.959835] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d2870 is same with the state(5) to be set 00:24:31.369 [2024-07-15 14:03:25.959870] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d2870 (9): Bad file descriptor 00:24:31.369 [2024-07-15 14:03:25.960262] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:31.369 [2024-07-15 14:03:25.960292] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:31.369 [2024-07-15 14:03:25.960307] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:31.369 [2024-07-15 14:03:25.960322] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:31.369 [2024-07-15 14:03:25.960347] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:31.369 [2024-07-15 14:03:25.960363] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:31.369 14:03:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.369 14:03:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:31.369 14:03:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:32.305 [2024-07-15 14:03:26.962872] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:32.305 [2024-07-15 14:03:26.962942] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:32.305 [2024-07-15 14:03:26.962958] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:32.305 [2024-07-15 14:03:26.962972] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:24:32.305 [2024-07-15 14:03:26.963013] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:32.305 [2024-07-15 14:03:26.963064] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:24:32.305 [2024-07-15 14:03:26.963143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.305 [2024-07-15 14:03:26.963163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.305 [2024-07-15 14:03:26.963182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.305 [2024-07-15 14:03:26.963194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.305 [2024-07-15 14:03:26.963208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.305 [2024-07-15 14:03:26.963220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.305 [2024-07-15 14:03:26.963232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.305 [2024-07-15 14:03:26.963247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.305 [2024-07-15 14:03:26.963262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.305 [2024-07-15 14:03:26.963275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.305 [2024-07-15 14:03:26.963287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
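The connect() failure with errno 110 (ETIMEDOUT) and the ABORTED - SQ DELETION completions above follow directly from the fault injection at steps @75/@76 of the script. Restated as stand-alone commands (namespace and device names taken from this run):
  # Drop the target-side address and down the port inside the target netns;
  # the discovery controller then loses its listener and the data controller
  # is declared failed once ctrlr-loss-timeout-sec (2 s here) expires.
  ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down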
00:24:32.305 [2024-07-15 14:03:26.963384] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d1cf0 (9): Bad file descriptor 00:24:32.305 [2024-07-15 14:03:26.964368] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:32.305 [2024-07-15 14:03:26.964388] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:24:32.305 14:03:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:32.305 14:03:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:32.305 14:03:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:32.305 14:03:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.305 14:03:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:32.305 14:03:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:32.305 14:03:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:32.305 14:03:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.305 14:03:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:24:32.305 14:03:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:32.305 14:03:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:32.305 14:03:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:24:32.305 14:03:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:32.305 14:03:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:32.305 14:03:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:32.305 14:03:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.305 14:03:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:32.305 14:03:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:32.305 14:03:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:32.305 14:03:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.305 14:03:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:32.305 14:03:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:33.685 14:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:33.685 14:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:33.685 14:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.685 14:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:33.685 14:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:24:33.685 14:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:33.685 14:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:33.685 14:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.685 14:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:33.685 14:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:34.253 [2024-07-15 14:03:29.022885] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:34.253 [2024-07-15 14:03:29.022925] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:34.253 [2024-07-15 14:03:29.022947] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:34.513 [2024-07-15 14:03:29.109226] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:24:34.513 14:03:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:34.513 14:03:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:34.513 14:03:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.513 14:03:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:34.513 14:03:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:34.513 14:03:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:34.513 14:03:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:34.513 14:03:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.513 [2024-07-15 14:03:29.173879] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:34.513 [2024-07-15 14:03:29.173927] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:34.513 [2024-07-15 14:03:29.173961] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:34.513 [2024-07-15 14:03:29.173983] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:24:34.513 [2024-07-15 14:03:29.173996] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:34.513 [2024-07-15 14:03:29.180757] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x2315800 was disconnected and freed. delete nvme_qpair. 
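After the address and link are restored at steps @82/@83, discovery re-attaches the subsystem as nvme1 and the stale qpair above is freed. One way to confirm the recovery by hand, assuming the same /tmp/host.sock RPC socket, is:
  # List the controllers and bdevs known to the host application; after the
  # re-attach the controller shows up as nvme1 and the namespace as nvme1n1.
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers
  scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'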
00:24:34.513 14:03:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:34.513 14:03:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:35.450 14:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:35.450 14:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:35.450 14:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:35.450 14:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.450 14:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:35.450 14:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:35.450 14:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:35.450 14:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.450 14:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:24:35.450 14:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:24:35.450 14:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3835120 00:24:35.450 14:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 3835120 ']' 00:24:35.450 14:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 3835120 00:24:35.450 14:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:24:35.450 14:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:35.450 14:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3835120 00:24:35.450 14:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:35.450 14:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:35.450 14:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3835120' 00:24:35.450 killing process with pid 3835120 00:24:35.450 14:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 3835120 00:24:35.450 14:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 3835120 00:24:35.708 14:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:24:35.708 14:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:35.708 14:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:24:35.708 14:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:35.708 14:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:24:35.708 14:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:35.708 14:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:35.708 rmmod nvme_tcp 00:24:35.708 rmmod nvme_fabrics 00:24:35.968 rmmod nvme_keyring 00:24:35.968 14:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:35.968 14:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:24:35.968 14:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:24:35.968 14:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 3834979 ']' 00:24:35.968 14:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 3834979 00:24:35.968 14:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 3834979 ']' 00:24:35.968 14:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 3834979 00:24:35.968 14:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:24:35.968 14:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:35.968 14:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3834979 00:24:35.968 14:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:35.968 14:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:35.968 14:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3834979' 00:24:35.968 killing process with pid 3834979 00:24:35.968 14:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 3834979 00:24:35.968 14:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 3834979 00:24:36.228 14:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:36.228 14:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:36.228 14:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:36.228 14:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:36.228 14:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:36.228 14:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:36.228 14:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:36.228 14:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:38.128 14:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:38.128 00:24:38.128 real 0m17.799s 00:24:38.128 user 0m25.630s 00:24:38.128 sys 0m3.121s 00:24:38.128 14:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:38.128 14:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:38.128 ************************************ 00:24:38.128 END TEST nvmf_discovery_remove_ifc 00:24:38.128 ************************************ 00:24:38.128 14:03:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:38.128 14:03:32 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:38.128 14:03:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:38.128 14:03:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:24:38.128 14:03:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:38.386 ************************************ 00:24:38.386 START TEST nvmf_identify_kernel_target 00:24:38.386 ************************************ 00:24:38.386 14:03:32 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:38.386 * Looking for test storage... 00:24:38.386 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:38.386 14:03:33 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:38.386 14:03:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:24:38.386 14:03:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:38.386 14:03:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:38.386 14:03:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:38.386 14:03:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:38.386 14:03:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:38.386 14:03:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:38.386 14:03:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:38.386 14:03:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:38.386 14:03:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:38.386 14:03:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:38.386 14:03:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:38.386 14:03:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:24:38.386 14:03:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:38.386 14:03:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:38.387 14:03:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:38.387 14:03:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:38.387 14:03:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:38.387 14:03:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:38.387 14:03:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:38.387 14:03:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:38.387 14:03:33 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.387 14:03:33 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.387 14:03:33 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.387 14:03:33 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:24:38.387 14:03:33 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.387 14:03:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:24:38.387 14:03:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:38.387 14:03:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:38.387 14:03:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:38.387 14:03:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:38.387 14:03:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:38.387 14:03:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:38.387 14:03:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:38.387 14:03:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:38.387 14:03:33 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:24:38.387 14:03:33 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:38.387 14:03:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:38.387 14:03:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:38.387 14:03:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:38.387 14:03:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:38.387 14:03:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:38.387 14:03:33 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:38.387 14:03:33 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:38.387 14:03:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:38.387 14:03:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:38.387 14:03:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:24:38.387 14:03:33 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:24:40.924 Found 0000:84:00.0 (0x8086 - 0x159b) 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:24:40.924 Found 0000:84:00.1 (0x8086 - 0x159b) 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:24:40.924 Found net devices under 0000:84:00.0: cvl_0_0 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:24:40.924 Found net devices under 0000:84:00.1: cvl_0_1 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:40.924 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:40.924 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:40.925 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:24:40.925 00:24:40.925 --- 10.0.0.2 ping statistics --- 00:24:40.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:40.925 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:24:40.925 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:40.925 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:40.925 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:24:40.925 00:24:40.925 --- 10.0.0.1 ping statistics --- 00:24:40.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:40.925 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:24:40.925 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:40.925 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:24:40.925 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:40.925 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:40.925 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:40.925 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:40.925 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:40.925 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:40.925 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:40.925 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:24:40.925 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:24:40.925 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:24:40.925 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:40.925 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:40.925 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:40.925 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:40.925 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:40.925 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:40.925 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:40.925 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:40.925 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:40.925 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:24:40.925 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:40.925 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:40.925 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:24:40.925 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:40.925 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:40.925 14:03:35 
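Stripped of harness variables, the nvmf_tcp_init sequence traced above moves one E810 port into a private network namespace, addresses both ends of the link, opens TCP port 4420, and ping-checks both directions. A condensed sketch of the same steps, using the interface names and addresses from this run (run as root):

  # Condensed sketch of the nvmf_tcp_init steps traced above.
  target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk
  ip -4 addr flush "$target_if"
  ip -4 addr flush "$initiator_if"
  ip netns add "$ns"
  ip link set "$target_if" netns "$ns"                         # target side lives inside the namespace
  ip addr add 10.0.0.1/24 dev "$initiator_if"                  # initiator stays in the root namespace
  ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
  ip link set "$initiator_if" up
  ip netns exec "$ns" ip link set "$target_if" up
  ip netns exec "$ns" ip link set lo up
  iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec "$ns" ping -c 1 10.0.0.1 # both directions must answer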
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:40.925 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:24:40.925 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:24:40.925 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:24:40.925 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:40.925 14:03:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:41.865 Waiting for block devices as requested 00:24:41.865 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:24:41.865 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:42.124 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:42.124 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:42.124 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:42.124 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:42.382 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:42.382 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:42.382 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:42.641 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:42.641 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:42.641 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:42.641 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:42.900 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:42.900 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:42.900 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:42.900 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:43.158 14:03:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:43.158 14:03:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:43.158 14:03:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:24:43.158 14:03:37 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:24:43.158 14:03:37 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:43.158 14:03:37 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:43.158 14:03:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:24:43.158 14:03:37 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:43.158 14:03:37 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:43.158 No valid GPT data, bailing 00:24:43.158 14:03:37 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:43.158 14:03:37 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:24:43.158 14:03:37 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:24:43.158 14:03:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:24:43.158 14:03:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:24:43.158 14:03:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:43.158 14:03:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:43.158 14:03:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:43.158 14:03:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:43.158 14:03:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:24:43.158 14:03:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:24:43.158 14:03:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:24:43.158 14:03:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:24:43.158 14:03:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:24:43.158 14:03:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:24:43.158 14:03:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:24:43.158 14:03:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:43.158 14:03:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:24:43.158 00:24:43.158 Discovery Log Number of Records 2, Generation counter 2 00:24:43.158 =====Discovery Log Entry 0====== 00:24:43.158 trtype: tcp 00:24:43.158 adrfam: ipv4 00:24:43.158 subtype: current discovery subsystem 00:24:43.158 treq: not specified, sq flow control disable supported 00:24:43.158 portid: 1 00:24:43.158 trsvcid: 4420 00:24:43.158 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:43.158 traddr: 10.0.0.1 00:24:43.158 eflags: none 00:24:43.158 sectype: none 00:24:43.158 =====Discovery Log Entry 1====== 00:24:43.158 trtype: tcp 00:24:43.158 adrfam: ipv4 00:24:43.158 subtype: nvme subsystem 00:24:43.158 treq: not specified, sq flow control disable supported 00:24:43.158 portid: 1 00:24:43.158 trsvcid: 4420 00:24:43.158 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:43.158 traddr: 10.0.0.1 00:24:43.158 eflags: none 00:24:43.158 sectype: none 00:24:43.158 14:03:37 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:24:43.158 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:24:43.158 EAL: No free 2048 kB hugepages reported on node 1 00:24:43.417 ===================================================== 00:24:43.417 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:43.417 ===================================================== 00:24:43.417 Controller Capabilities/Features 00:24:43.417 ================================ 00:24:43.417 Vendor ID: 0000 00:24:43.417 Subsystem Vendor ID: 0000 00:24:43.417 Serial Number: 82d024bb564a591283b5 00:24:43.417 Model Number: Linux 00:24:43.417 Firmware Version: 6.7.0-68 00:24:43.417 Recommended Arb Burst: 0 00:24:43.417 IEEE OUI Identifier: 00 00 00 00:24:43.417 Multi-path I/O 00:24:43.417 May have multiple subsystem ports: No 00:24:43.417 May have multiple 
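The configure_kernel_target trace above builds a Linux kernel NVMe-oF target purely through nvmet configfs: pick an unused local NVMe namespace (the spdk-gpt.py / blkid probe above confirms /dev/nvme0n1 carries no partition table), create a subsystem, a namespace and a port, then link the two so the discovery service advertises them. The redirect targets of the individual echo calls are truncated in the log, so the attribute file names below are the stock nvmet configfs names and should be read as assumptions:

  # Sketch of a kernel nvmet TCP target equivalent to the trace above (run as root).
  # Attribute file names are the standard nvmet configfs ones (assumed; the trace
  # only shows the values being echoed, not their destinations).
  nqn=nqn.2016-06.io.spdk:testnqn
  subsys=/sys/kernel/config/nvmet/subsystems/$nqn
  port=/sys/kernel/config/nvmet/ports/1
  modprobe nvmet
  mkdir "$subsys"
  mkdir "$subsys/namespaces/1"
  mkdir "$port"
  echo "SPDK-$nqn"  > "$subsys/attr_model"
  echo 1            > "$subsys/attr_allow_any_host"
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"
  nvme discover -t tcp -a 10.0.0.1 -s 4420   # should list the discovery subsystem plus testnqn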
controllers: No 00:24:43.417 Associated with SR-IOV VF: No 00:24:43.417 Max Data Transfer Size: Unlimited 00:24:43.417 Max Number of Namespaces: 0 00:24:43.417 Max Number of I/O Queues: 1024 00:24:43.417 NVMe Specification Version (VS): 1.3 00:24:43.417 NVMe Specification Version (Identify): 1.3 00:24:43.417 Maximum Queue Entries: 1024 00:24:43.417 Contiguous Queues Required: No 00:24:43.417 Arbitration Mechanisms Supported 00:24:43.417 Weighted Round Robin: Not Supported 00:24:43.417 Vendor Specific: Not Supported 00:24:43.417 Reset Timeout: 7500 ms 00:24:43.417 Doorbell Stride: 4 bytes 00:24:43.417 NVM Subsystem Reset: Not Supported 00:24:43.417 Command Sets Supported 00:24:43.417 NVM Command Set: Supported 00:24:43.417 Boot Partition: Not Supported 00:24:43.417 Memory Page Size Minimum: 4096 bytes 00:24:43.417 Memory Page Size Maximum: 4096 bytes 00:24:43.417 Persistent Memory Region: Not Supported 00:24:43.417 Optional Asynchronous Events Supported 00:24:43.417 Namespace Attribute Notices: Not Supported 00:24:43.417 Firmware Activation Notices: Not Supported 00:24:43.417 ANA Change Notices: Not Supported 00:24:43.417 PLE Aggregate Log Change Notices: Not Supported 00:24:43.417 LBA Status Info Alert Notices: Not Supported 00:24:43.417 EGE Aggregate Log Change Notices: Not Supported 00:24:43.417 Normal NVM Subsystem Shutdown event: Not Supported 00:24:43.417 Zone Descriptor Change Notices: Not Supported 00:24:43.417 Discovery Log Change Notices: Supported 00:24:43.418 Controller Attributes 00:24:43.418 128-bit Host Identifier: Not Supported 00:24:43.418 Non-Operational Permissive Mode: Not Supported 00:24:43.418 NVM Sets: Not Supported 00:24:43.418 Read Recovery Levels: Not Supported 00:24:43.418 Endurance Groups: Not Supported 00:24:43.418 Predictable Latency Mode: Not Supported 00:24:43.418 Traffic Based Keep ALive: Not Supported 00:24:43.418 Namespace Granularity: Not Supported 00:24:43.418 SQ Associations: Not Supported 00:24:43.418 UUID List: Not Supported 00:24:43.418 Multi-Domain Subsystem: Not Supported 00:24:43.418 Fixed Capacity Management: Not Supported 00:24:43.418 Variable Capacity Management: Not Supported 00:24:43.418 Delete Endurance Group: Not Supported 00:24:43.418 Delete NVM Set: Not Supported 00:24:43.418 Extended LBA Formats Supported: Not Supported 00:24:43.418 Flexible Data Placement Supported: Not Supported 00:24:43.418 00:24:43.418 Controller Memory Buffer Support 00:24:43.418 ================================ 00:24:43.418 Supported: No 00:24:43.418 00:24:43.418 Persistent Memory Region Support 00:24:43.418 ================================ 00:24:43.418 Supported: No 00:24:43.418 00:24:43.418 Admin Command Set Attributes 00:24:43.418 ============================ 00:24:43.418 Security Send/Receive: Not Supported 00:24:43.418 Format NVM: Not Supported 00:24:43.418 Firmware Activate/Download: Not Supported 00:24:43.418 Namespace Management: Not Supported 00:24:43.418 Device Self-Test: Not Supported 00:24:43.418 Directives: Not Supported 00:24:43.418 NVMe-MI: Not Supported 00:24:43.418 Virtualization Management: Not Supported 00:24:43.418 Doorbell Buffer Config: Not Supported 00:24:43.418 Get LBA Status Capability: Not Supported 00:24:43.418 Command & Feature Lockdown Capability: Not Supported 00:24:43.418 Abort Command Limit: 1 00:24:43.418 Async Event Request Limit: 1 00:24:43.418 Number of Firmware Slots: N/A 00:24:43.418 Firmware Slot 1 Read-Only: N/A 00:24:43.418 Firmware Activation Without Reset: N/A 00:24:43.418 Multiple Update Detection Support: N/A 
00:24:43.418 Firmware Update Granularity: No Information Provided 00:24:43.418 Per-Namespace SMART Log: No 00:24:43.418 Asymmetric Namespace Access Log Page: Not Supported 00:24:43.418 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:43.418 Command Effects Log Page: Not Supported 00:24:43.418 Get Log Page Extended Data: Supported 00:24:43.418 Telemetry Log Pages: Not Supported 00:24:43.418 Persistent Event Log Pages: Not Supported 00:24:43.418 Supported Log Pages Log Page: May Support 00:24:43.418 Commands Supported & Effects Log Page: Not Supported 00:24:43.418 Feature Identifiers & Effects Log Page:May Support 00:24:43.418 NVMe-MI Commands & Effects Log Page: May Support 00:24:43.418 Data Area 4 for Telemetry Log: Not Supported 00:24:43.418 Error Log Page Entries Supported: 1 00:24:43.418 Keep Alive: Not Supported 00:24:43.418 00:24:43.418 NVM Command Set Attributes 00:24:43.418 ========================== 00:24:43.418 Submission Queue Entry Size 00:24:43.418 Max: 1 00:24:43.418 Min: 1 00:24:43.418 Completion Queue Entry Size 00:24:43.418 Max: 1 00:24:43.418 Min: 1 00:24:43.418 Number of Namespaces: 0 00:24:43.418 Compare Command: Not Supported 00:24:43.418 Write Uncorrectable Command: Not Supported 00:24:43.418 Dataset Management Command: Not Supported 00:24:43.418 Write Zeroes Command: Not Supported 00:24:43.418 Set Features Save Field: Not Supported 00:24:43.418 Reservations: Not Supported 00:24:43.418 Timestamp: Not Supported 00:24:43.418 Copy: Not Supported 00:24:43.418 Volatile Write Cache: Not Present 00:24:43.418 Atomic Write Unit (Normal): 1 00:24:43.418 Atomic Write Unit (PFail): 1 00:24:43.418 Atomic Compare & Write Unit: 1 00:24:43.418 Fused Compare & Write: Not Supported 00:24:43.418 Scatter-Gather List 00:24:43.418 SGL Command Set: Supported 00:24:43.418 SGL Keyed: Not Supported 00:24:43.418 SGL Bit Bucket Descriptor: Not Supported 00:24:43.418 SGL Metadata Pointer: Not Supported 00:24:43.418 Oversized SGL: Not Supported 00:24:43.418 SGL Metadata Address: Not Supported 00:24:43.418 SGL Offset: Supported 00:24:43.418 Transport SGL Data Block: Not Supported 00:24:43.418 Replay Protected Memory Block: Not Supported 00:24:43.418 00:24:43.418 Firmware Slot Information 00:24:43.418 ========================= 00:24:43.418 Active slot: 0 00:24:43.418 00:24:43.418 00:24:43.418 Error Log 00:24:43.418 ========= 00:24:43.418 00:24:43.418 Active Namespaces 00:24:43.418 ================= 00:24:43.418 Discovery Log Page 00:24:43.418 ================== 00:24:43.418 Generation Counter: 2 00:24:43.418 Number of Records: 2 00:24:43.418 Record Format: 0 00:24:43.418 00:24:43.418 Discovery Log Entry 0 00:24:43.418 ---------------------- 00:24:43.418 Transport Type: 3 (TCP) 00:24:43.418 Address Family: 1 (IPv4) 00:24:43.418 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:43.418 Entry Flags: 00:24:43.418 Duplicate Returned Information: 0 00:24:43.418 Explicit Persistent Connection Support for Discovery: 0 00:24:43.418 Transport Requirements: 00:24:43.418 Secure Channel: Not Specified 00:24:43.418 Port ID: 1 (0x0001) 00:24:43.418 Controller ID: 65535 (0xffff) 00:24:43.418 Admin Max SQ Size: 32 00:24:43.418 Transport Service Identifier: 4420 00:24:43.418 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:43.418 Transport Address: 10.0.0.1 00:24:43.418 Discovery Log Entry 1 00:24:43.418 ---------------------- 00:24:43.418 Transport Type: 3 (TCP) 00:24:43.418 Address Family: 1 (IPv4) 00:24:43.418 Subsystem Type: 2 (NVM Subsystem) 00:24:43.418 Entry Flags: 
00:24:43.418 Duplicate Returned Information: 0 00:24:43.418 Explicit Persistent Connection Support for Discovery: 0 00:24:43.418 Transport Requirements: 00:24:43.418 Secure Channel: Not Specified 00:24:43.418 Port ID: 1 (0x0001) 00:24:43.418 Controller ID: 65535 (0xffff) 00:24:43.418 Admin Max SQ Size: 32 00:24:43.418 Transport Service Identifier: 4420 00:24:43.418 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:24:43.418 Transport Address: 10.0.0.1 00:24:43.419 14:03:38 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:43.419 EAL: No free 2048 kB hugepages reported on node 1 00:24:43.419 get_feature(0x01) failed 00:24:43.419 get_feature(0x02) failed 00:24:43.419 get_feature(0x04) failed 00:24:43.419 ===================================================== 00:24:43.419 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:43.419 ===================================================== 00:24:43.419 Controller Capabilities/Features 00:24:43.419 ================================ 00:24:43.419 Vendor ID: 0000 00:24:43.419 Subsystem Vendor ID: 0000 00:24:43.419 Serial Number: 43eac29a0afe4bdb59a6 00:24:43.419 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:24:43.419 Firmware Version: 6.7.0-68 00:24:43.419 Recommended Arb Burst: 6 00:24:43.419 IEEE OUI Identifier: 00 00 00 00:24:43.419 Multi-path I/O 00:24:43.419 May have multiple subsystem ports: Yes 00:24:43.419 May have multiple controllers: Yes 00:24:43.419 Associated with SR-IOV VF: No 00:24:43.419 Max Data Transfer Size: Unlimited 00:24:43.419 Max Number of Namespaces: 1024 00:24:43.419 Max Number of I/O Queues: 128 00:24:43.419 NVMe Specification Version (VS): 1.3 00:24:43.419 NVMe Specification Version (Identify): 1.3 00:24:43.419 Maximum Queue Entries: 1024 00:24:43.419 Contiguous Queues Required: No 00:24:43.419 Arbitration Mechanisms Supported 00:24:43.419 Weighted Round Robin: Not Supported 00:24:43.419 Vendor Specific: Not Supported 00:24:43.419 Reset Timeout: 7500 ms 00:24:43.419 Doorbell Stride: 4 bytes 00:24:43.419 NVM Subsystem Reset: Not Supported 00:24:43.419 Command Sets Supported 00:24:43.419 NVM Command Set: Supported 00:24:43.419 Boot Partition: Not Supported 00:24:43.419 Memory Page Size Minimum: 4096 bytes 00:24:43.419 Memory Page Size Maximum: 4096 bytes 00:24:43.419 Persistent Memory Region: Not Supported 00:24:43.419 Optional Asynchronous Events Supported 00:24:43.419 Namespace Attribute Notices: Supported 00:24:43.419 Firmware Activation Notices: Not Supported 00:24:43.419 ANA Change Notices: Supported 00:24:43.419 PLE Aggregate Log Change Notices: Not Supported 00:24:43.419 LBA Status Info Alert Notices: Not Supported 00:24:43.419 EGE Aggregate Log Change Notices: Not Supported 00:24:43.419 Normal NVM Subsystem Shutdown event: Not Supported 00:24:43.419 Zone Descriptor Change Notices: Not Supported 00:24:43.419 Discovery Log Change Notices: Not Supported 00:24:43.419 Controller Attributes 00:24:43.419 128-bit Host Identifier: Supported 00:24:43.419 Non-Operational Permissive Mode: Not Supported 00:24:43.419 NVM Sets: Not Supported 00:24:43.419 Read Recovery Levels: Not Supported 00:24:43.419 Endurance Groups: Not Supported 00:24:43.419 Predictable Latency Mode: Not Supported 00:24:43.419 Traffic Based Keep ALive: Supported 00:24:43.419 Namespace Granularity: Not Supported 
00:24:43.419 SQ Associations: Not Supported 00:24:43.419 UUID List: Not Supported 00:24:43.419 Multi-Domain Subsystem: Not Supported 00:24:43.419 Fixed Capacity Management: Not Supported 00:24:43.419 Variable Capacity Management: Not Supported 00:24:43.419 Delete Endurance Group: Not Supported 00:24:43.419 Delete NVM Set: Not Supported 00:24:43.419 Extended LBA Formats Supported: Not Supported 00:24:43.419 Flexible Data Placement Supported: Not Supported 00:24:43.419 00:24:43.419 Controller Memory Buffer Support 00:24:43.419 ================================ 00:24:43.419 Supported: No 00:24:43.419 00:24:43.419 Persistent Memory Region Support 00:24:43.419 ================================ 00:24:43.419 Supported: No 00:24:43.419 00:24:43.419 Admin Command Set Attributes 00:24:43.419 ============================ 00:24:43.419 Security Send/Receive: Not Supported 00:24:43.419 Format NVM: Not Supported 00:24:43.419 Firmware Activate/Download: Not Supported 00:24:43.419 Namespace Management: Not Supported 00:24:43.419 Device Self-Test: Not Supported 00:24:43.419 Directives: Not Supported 00:24:43.419 NVMe-MI: Not Supported 00:24:43.419 Virtualization Management: Not Supported 00:24:43.419 Doorbell Buffer Config: Not Supported 00:24:43.419 Get LBA Status Capability: Not Supported 00:24:43.419 Command & Feature Lockdown Capability: Not Supported 00:24:43.419 Abort Command Limit: 4 00:24:43.419 Async Event Request Limit: 4 00:24:43.419 Number of Firmware Slots: N/A 00:24:43.419 Firmware Slot 1 Read-Only: N/A 00:24:43.419 Firmware Activation Without Reset: N/A 00:24:43.419 Multiple Update Detection Support: N/A 00:24:43.419 Firmware Update Granularity: No Information Provided 00:24:43.419 Per-Namespace SMART Log: Yes 00:24:43.419 Asymmetric Namespace Access Log Page: Supported 00:24:43.419 ANA Transition Time : 10 sec 00:24:43.419 00:24:43.419 Asymmetric Namespace Access Capabilities 00:24:43.419 ANA Optimized State : Supported 00:24:43.419 ANA Non-Optimized State : Supported 00:24:43.419 ANA Inaccessible State : Supported 00:24:43.419 ANA Persistent Loss State : Supported 00:24:43.419 ANA Change State : Supported 00:24:43.419 ANAGRPID is not changed : No 00:24:43.419 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:24:43.419 00:24:43.419 ANA Group Identifier Maximum : 128 00:24:43.419 Number of ANA Group Identifiers : 128 00:24:43.419 Max Number of Allowed Namespaces : 1024 00:24:43.419 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:24:43.419 Command Effects Log Page: Supported 00:24:43.419 Get Log Page Extended Data: Supported 00:24:43.419 Telemetry Log Pages: Not Supported 00:24:43.419 Persistent Event Log Pages: Not Supported 00:24:43.419 Supported Log Pages Log Page: May Support 00:24:43.419 Commands Supported & Effects Log Page: Not Supported 00:24:43.419 Feature Identifiers & Effects Log Page:May Support 00:24:43.419 NVMe-MI Commands & Effects Log Page: May Support 00:24:43.419 Data Area 4 for Telemetry Log: Not Supported 00:24:43.419 Error Log Page Entries Supported: 128 00:24:43.419 Keep Alive: Supported 00:24:43.419 Keep Alive Granularity: 1000 ms 00:24:43.419 00:24:43.419 NVM Command Set Attributes 00:24:43.419 ========================== 00:24:43.419 Submission Queue Entry Size 00:24:43.419 Max: 64 00:24:43.419 Min: 64 00:24:43.419 Completion Queue Entry Size 00:24:43.419 Max: 16 00:24:43.419 Min: 16 00:24:43.419 Number of Namespaces: 1024 00:24:43.419 Compare Command: Not Supported 00:24:43.419 Write Uncorrectable Command: Not Supported 00:24:43.419 Dataset Management Command: Supported 
00:24:43.419 Write Zeroes Command: Supported 00:24:43.419 Set Features Save Field: Not Supported 00:24:43.419 Reservations: Not Supported 00:24:43.419 Timestamp: Not Supported 00:24:43.419 Copy: Not Supported 00:24:43.419 Volatile Write Cache: Present 00:24:43.420 Atomic Write Unit (Normal): 1 00:24:43.420 Atomic Write Unit (PFail): 1 00:24:43.420 Atomic Compare & Write Unit: 1 00:24:43.420 Fused Compare & Write: Not Supported 00:24:43.420 Scatter-Gather List 00:24:43.420 SGL Command Set: Supported 00:24:43.420 SGL Keyed: Not Supported 00:24:43.420 SGL Bit Bucket Descriptor: Not Supported 00:24:43.420 SGL Metadata Pointer: Not Supported 00:24:43.420 Oversized SGL: Not Supported 00:24:43.420 SGL Metadata Address: Not Supported 00:24:43.420 SGL Offset: Supported 00:24:43.420 Transport SGL Data Block: Not Supported 00:24:43.420 Replay Protected Memory Block: Not Supported 00:24:43.420 00:24:43.420 Firmware Slot Information 00:24:43.420 ========================= 00:24:43.420 Active slot: 0 00:24:43.420 00:24:43.420 Asymmetric Namespace Access 00:24:43.420 =========================== 00:24:43.420 Change Count : 0 00:24:43.420 Number of ANA Group Descriptors : 1 00:24:43.420 ANA Group Descriptor : 0 00:24:43.420 ANA Group ID : 1 00:24:43.420 Number of NSID Values : 1 00:24:43.420 Change Count : 0 00:24:43.420 ANA State : 1 00:24:43.420 Namespace Identifier : 1 00:24:43.420 00:24:43.420 Commands Supported and Effects 00:24:43.420 ============================== 00:24:43.420 Admin Commands 00:24:43.420 -------------- 00:24:43.420 Get Log Page (02h): Supported 00:24:43.420 Identify (06h): Supported 00:24:43.420 Abort (08h): Supported 00:24:43.420 Set Features (09h): Supported 00:24:43.420 Get Features (0Ah): Supported 00:24:43.420 Asynchronous Event Request (0Ch): Supported 00:24:43.420 Keep Alive (18h): Supported 00:24:43.420 I/O Commands 00:24:43.420 ------------ 00:24:43.420 Flush (00h): Supported 00:24:43.420 Write (01h): Supported LBA-Change 00:24:43.420 Read (02h): Supported 00:24:43.420 Write Zeroes (08h): Supported LBA-Change 00:24:43.420 Dataset Management (09h): Supported 00:24:43.420 00:24:43.420 Error Log 00:24:43.420 ========= 00:24:43.420 Entry: 0 00:24:43.420 Error Count: 0x3 00:24:43.420 Submission Queue Id: 0x0 00:24:43.420 Command Id: 0x5 00:24:43.420 Phase Bit: 0 00:24:43.420 Status Code: 0x2 00:24:43.420 Status Code Type: 0x0 00:24:43.420 Do Not Retry: 1 00:24:43.420 Error Location: 0x28 00:24:43.420 LBA: 0x0 00:24:43.420 Namespace: 0x0 00:24:43.420 Vendor Log Page: 0x0 00:24:43.420 ----------- 00:24:43.420 Entry: 1 00:24:43.420 Error Count: 0x2 00:24:43.420 Submission Queue Id: 0x0 00:24:43.420 Command Id: 0x5 00:24:43.420 Phase Bit: 0 00:24:43.420 Status Code: 0x2 00:24:43.420 Status Code Type: 0x0 00:24:43.420 Do Not Retry: 1 00:24:43.420 Error Location: 0x28 00:24:43.420 LBA: 0x0 00:24:43.420 Namespace: 0x0 00:24:43.420 Vendor Log Page: 0x0 00:24:43.420 ----------- 00:24:43.420 Entry: 2 00:24:43.420 Error Count: 0x1 00:24:43.420 Submission Queue Id: 0x0 00:24:43.420 Command Id: 0x4 00:24:43.420 Phase Bit: 0 00:24:43.420 Status Code: 0x2 00:24:43.420 Status Code Type: 0x0 00:24:43.420 Do Not Retry: 1 00:24:43.420 Error Location: 0x28 00:24:43.420 LBA: 0x0 00:24:43.420 Namespace: 0x0 00:24:43.420 Vendor Log Page: 0x0 00:24:43.420 00:24:43.420 Number of Queues 00:24:43.420 ================ 00:24:43.420 Number of I/O Submission Queues: 128 00:24:43.420 Number of I/O Completion Queues: 128 00:24:43.420 00:24:43.420 ZNS Specific Controller Data 00:24:43.420 
============================ 00:24:43.420 Zone Append Size Limit: 0 00:24:43.420 00:24:43.420 00:24:43.420 Active Namespaces 00:24:43.420 ================= 00:24:43.420 get_feature(0x05) failed 00:24:43.420 Namespace ID:1 00:24:43.420 Command Set Identifier: NVM (00h) 00:24:43.420 Deallocate: Supported 00:24:43.420 Deallocated/Unwritten Error: Not Supported 00:24:43.420 Deallocated Read Value: Unknown 00:24:43.420 Deallocate in Write Zeroes: Not Supported 00:24:43.420 Deallocated Guard Field: 0xFFFF 00:24:43.420 Flush: Supported 00:24:43.420 Reservation: Not Supported 00:24:43.420 Namespace Sharing Capabilities: Multiple Controllers 00:24:43.420 Size (in LBAs): 1953525168 (931GiB) 00:24:43.420 Capacity (in LBAs): 1953525168 (931GiB) 00:24:43.420 Utilization (in LBAs): 1953525168 (931GiB) 00:24:43.420 UUID: 1795c932-377a-4f68-82d7-8692a971dadd 00:24:43.420 Thin Provisioning: Not Supported 00:24:43.420 Per-NS Atomic Units: Yes 00:24:43.420 Atomic Boundary Size (Normal): 0 00:24:43.420 Atomic Boundary Size (PFail): 0 00:24:43.420 Atomic Boundary Offset: 0 00:24:43.420 NGUID/EUI64 Never Reused: No 00:24:43.420 ANA group ID: 1 00:24:43.420 Namespace Write Protected: No 00:24:43.420 Number of LBA Formats: 1 00:24:43.420 Current LBA Format: LBA Format #00 00:24:43.420 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:43.420 00:24:43.420 14:03:38 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:24:43.420 14:03:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:43.420 14:03:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:24:43.420 14:03:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:43.420 14:03:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:24:43.420 14:03:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:43.420 14:03:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:43.420 rmmod nvme_tcp 00:24:43.420 rmmod nvme_fabrics 00:24:43.420 14:03:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:43.420 14:03:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:24:43.420 14:03:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:24:43.420 14:03:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:24:43.420 14:03:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:43.420 14:03:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:43.420 14:03:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:43.420 14:03:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:43.420 14:03:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:43.420 14:03:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:43.420 14:03:38 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:43.420 14:03:38 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.954 14:03:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:45.954 
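Once both identify runs complete, the EXIT trap fires nvmftestfini (sync, unload nvme-tcp and nvme-fabrics, drop the test namespace, flush the initiator address) followed by clean_kernel_target, whose trace follows. The teardown is simply the configfs setup in reverse; a minimal sketch, again assuming the standard nvmet attribute names because the redirect targets are truncated:

  # Minimal teardown sketch matching the clean_kernel_target trace below (run as root).
  nqn=nqn.2016-06.io.spdk:testnqn
  subsys=/sys/kernel/config/nvmet/subsystems/$nqn
  port=/sys/kernel/config/nvmet/ports/1
  echo 0 > "$subsys/namespaces/1/enable"   # assumed destination of the traced 'echo 0'
  rm -f "$port/subsystems/$nqn"            # break the port -> subsystem link first
  rmdir "$subsys/namespaces/1" "$port" "$subsys"
  modprobe -r nvmet_tcp nvmet              # modprobe -r takes several modules at once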
14:03:40 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:24:45.954 14:03:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:45.954 14:03:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:24:45.954 14:03:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:45.954 14:03:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:45.954 14:03:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:45.954 14:03:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:45.954 14:03:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:24:45.954 14:03:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:24:45.954 14:03:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:46.891 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:46.891 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:46.891 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:46.891 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:46.891 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:46.891 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:46.891 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:46.891 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:46.891 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:46.891 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:46.891 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:46.891 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:46.891 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:46.891 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:46.891 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:46.891 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:47.826 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:24:47.826 00:24:47.826 real 0m9.637s 00:24:47.826 user 0m2.059s 00:24:47.826 sys 0m3.579s 00:24:47.826 14:03:42 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:47.826 14:03:42 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:47.826 ************************************ 00:24:47.826 END TEST nvmf_identify_kernel_target 00:24:47.826 ************************************ 00:24:47.826 14:03:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:47.826 14:03:42 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:47.826 14:03:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:47.826 14:03:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:47.826 14:03:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:48.084 ************************************ 00:24:48.084 START TEST nvmf_auth_host 00:24:48.084 ************************************ 00:24:48.084 14:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:48.084 * Looking for test storage... 00:24:48.084 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:48.084 14:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:48.084 14:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:24:48.084 14:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:48.084 14:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:48.084 14:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:48.084 14:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:48.084 14:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:48.084 14:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:48.084 14:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:48.084 14:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:48.084 14:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:48.084 14:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:48.084 14:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:48.084 14:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:24:48.084 14:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:48.084 14:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:48.084 14:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:48.084 14:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:48.084 14:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:48.084 14:03:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:48.084 14:03:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:48.084 14:03:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:48.084 14:03:42 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.084 14:03:42 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.084 14:03:42 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.084 14:03:42 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:24:48.084 14:03:42 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.084 14:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:24:48.084 14:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:48.084 14:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:48.084 14:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:48.084 14:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:48.084 14:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:48.084 14:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:48.084 14:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:48.084 14:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:48.084 14:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:24:48.084 14:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:24:48.084 14:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:24:48.084 14:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:24:48.085 14:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:48.085 14:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:48.085 14:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:24:48.085 14:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:24:48.085 14:03:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:24:48.085 14:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:48.085 14:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:48.085 14:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:48.085 14:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:48.085 14:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:48.085 14:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:48.085 14:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:48.085 14:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:48.085 14:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:48.085 14:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:48.085 14:03:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:24:48.085 14:03:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:50.616 
14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:24:50.616 Found 0000:84:00.0 (0x8086 - 0x159b) 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:24:50.616 Found 0000:84:00.1 (0x8086 - 0x159b) 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:24:50.616 Found net devices under 0000:84:00.0: 
cvl_0_0 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:24:50.616 Found net devices under 0000:84:00.1: cvl_0_1 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:50.616 14:03:44 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:50.616 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:50.616 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:50.616 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:24:50.616 00:24:50.616 --- 10.0.0.2 ping statistics --- 00:24:50.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:50.616 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:24:50.616 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:50.616 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:50.616 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:24:50.616 00:24:50.616 --- 10.0.0.1 ping statistics --- 00:24:50.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:50.616 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:24:50.616 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:50.616 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:24:50.616 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:50.616 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:50.616 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:50.616 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:50.616 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:50.616 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:50.616 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:50.616 14:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:24:50.616 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:50.616 14:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:50.616 14:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.616 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=3842238 00:24:50.616 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:24:50.616 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 3842238 00:24:50.616 14:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 3842238 ']' 00:24:50.616 14:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:50.616 14:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:50.617 14:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
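nvmfappstart launches the SPDK target inside the test namespace and blocks until its RPC socket answers. A much-reduced sketch of that start-and-wait step follows; the harness's waitforlisten also tracks the PID and issues an RPC probe, so polling the UNIX socket here is only the gist, and the relative binary path is a simplification of the absolute path in the trace:

  # Reduced sketch of nvmfappstart: run nvmf_tgt in the namespace, wait for its RPC socket.
  ns=cvl_0_0_ns_spdk
  ip netns exec "$ns" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
  nvmfpid=$!
  until [[ -S /var/tmp/spdk.sock ]]; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.5
  done
  echo "nvmf_tgt ($nvmfpid) is listening on /var/tmp/spdk.sock"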
00:24:50.617 14:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:50.617 14:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.617 14:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:50.617 14:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:24:50.617 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:50.617 14:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:50.617 14:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.617 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:50.617 14:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:24:50.617 14:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:24:50.617 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:50.617 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:50.617 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:50.617 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:24:50.617 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:50.617 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:50.617 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f1c4408c760df7b6517330927c39d4df 00:24:50.617 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:50.617 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.eYk 00:24:50.617 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f1c4408c760df7b6517330927c39d4df 0 00:24:50.617 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f1c4408c760df7b6517330927c39d4df 0 00:24:50.617 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:50.617 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:50.617 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f1c4408c760df7b6517330927c39d4df 00:24:50.617 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:24:50.617 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:50.617 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.eYk 00:24:50.617 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.eYk 00:24:50.617 14:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.eYk 00:24:50.617 14:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:24:50.617 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:50.617 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:50.617 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:50.617 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:24:50.617 
14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:24:50.617 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:50.617 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=706e457209f9f89c2b2f5775df78814610a5cf41ebae140bafdc852a81b4f247 00:24:50.617 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:24:50.617 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.a9K 00:24:50.617 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 706e457209f9f89c2b2f5775df78814610a5cf41ebae140bafdc852a81b4f247 3 00:24:50.617 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 706e457209f9f89c2b2f5775df78814610a5cf41ebae140bafdc852a81b4f247 3 00:24:50.617 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:50.617 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:50.617 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=706e457209f9f89c2b2f5775df78814610a5cf41ebae140bafdc852a81b4f247 00:24:50.617 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:24:50.617 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:50.875 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.a9K 00:24:50.875 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.a9K 00:24:50.875 14:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.a9K 00:24:50.875 14:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:24:50.875 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:50.875 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:50.875 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:50.875 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:24:50.875 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:24:50.875 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:50.875 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=baffb090fd194a89078d2f4ce7b396f99d5f50f7714da8f1 00:24:50.875 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:50.875 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.vzK 00:24:50.875 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key baffb090fd194a89078d2f4ce7b396f99d5f50f7714da8f1 0 00:24:50.875 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 baffb090fd194a89078d2f4ce7b396f99d5f50f7714da8f1 0 00:24:50.875 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:50.875 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:50.875 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=baffb090fd194a89078d2f4ce7b396f99d5f50f7714da8f1 00:24:50.875 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:24:50.875 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:50.875 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.vzK 00:24:50.875 14:03:45 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.vzK 00:24:50.875 14:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.vzK 00:24:50.875 14:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:24:50.875 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:50.875 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:50.875 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:50.875 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:24:50.875 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:24:50.875 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:50.875 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=be685112a3afae58e45b741a72df1e98b23964105baa0b1e 00:24:50.875 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:24:50.875 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.dqe 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key be685112a3afae58e45b741a72df1e98b23964105baa0b1e 2 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 be685112a3afae58e45b741a72df1e98b23964105baa0b1e 2 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=be685112a3afae58e45b741a72df1e98b23964105baa0b1e 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.dqe 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.dqe 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.dqe 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4579ba0dddc94aae174fe5dede86e466 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.csZ 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4579ba0dddc94aae174fe5dede86e466 1 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4579ba0dddc94aae174fe5dede86e466 1 
00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4579ba0dddc94aae174fe5dede86e466 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.csZ 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.csZ 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.csZ 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f433c7cf881f54029b647b7a65dc7089 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Tue 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f433c7cf881f54029b647b7a65dc7089 1 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f433c7cf881f54029b647b7a65dc7089 1 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f433c7cf881f54029b647b7a65dc7089 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Tue 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Tue 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Tue 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=aab454f5a1666c6b840a95f6b38e2249c724c12e71939c50 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.bHB 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key aab454f5a1666c6b840a95f6b38e2249c724c12e71939c50 2 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 aab454f5a1666c6b840a95f6b38e2249c724c12e71939c50 2 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=aab454f5a1666c6b840a95f6b38e2249c724c12e71939c50 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.bHB 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.bHB 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.bHB 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a8d27e739475fd52b857cb766cf991af 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.HYg 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a8d27e739475fd52b857cb766cf991af 0 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a8d27e739475fd52b857cb766cf991af 0 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a8d27e739475fd52b857cb766cf991af 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:24:50.876 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:51.134 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.HYg 00:24:51.134 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.HYg 00:24:51.134 14:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.HYg 00:24:51.134 14:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:24:51.134 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:24:51.134 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:51.134 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:51.134 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:24:51.134 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:24:51.134 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:51.134 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e54470041055f0b03d2f9337b2d5212d7455ee4d816b4e5d2644d38cf5bfbefc 00:24:51.134 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:24:51.134 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.JMV 00:24:51.134 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e54470041055f0b03d2f9337b2d5212d7455ee4d816b4e5d2644d38cf5bfbefc 3 00:24:51.134 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e54470041055f0b03d2f9337b2d5212d7455ee4d816b4e5d2644d38cf5bfbefc 3 00:24:51.134 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:51.134 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:51.134 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e54470041055f0b03d2f9337b2d5212d7455ee4d816b4e5d2644d38cf5bfbefc 00:24:51.134 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:24:51.134 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:51.134 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.JMV 00:24:51.134 14:03:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.JMV 00:24:51.134 14:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.JMV 00:24:51.134 14:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:24:51.134 14:03:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3842238 00:24:51.134 14:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 3842238 ']' 00:24:51.134 14:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:51.134 14:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:51.134 14:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:51.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
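The gen_dhchap_key calls above draw the requested number of hex characters from /dev/urandom and hand them to a small inline Python snippet (format_dhchap_key in nvmf/common.sh) that wraps them into the DHHC-1 secret representation used by NVMe DH-HMAC-CHAP, with the digest indicator 00/01/02/03 for null/sha256/sha384/sha512. A hedged sketch of that transformation is below; it assumes the conventional payload of key bytes followed by a CRC-32 trailer, base64-encoded, which is consistent with the DHHC-1:00:ZjFj...: strings that appear later in this log, but the exact byte layout should be checked against nvmf/common.sh rather than taken from here.

# Sketch: turn a hex string into a DHHC-1 secret (digest 0=null, 1=sha256, 2=sha384, 3=sha512).
format_dhchap_key() {
    local key=$1 digest=$2
    python3 - "$key" "$digest" <<'PYEOF'
import base64, struct, sys, zlib
key = sys.argv[1].encode()                # the ASCII hex string itself is the secret material
# assumed layout: key bytes + little-endian CRC-32 of the key, then base64
payload = key + struct.pack('<I', zlib.crc32(key) & 0xffffffff)
print("DHHC-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(payload).decode()))
PYEOF
}

key=$(xxd -p -c0 -l 16 /dev/urandom)      # 32 hex chars, as in gen_dhchap_key null 32
format_dhchap_key "$key" 0                # -> DHHC-1:00:<base64>: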
00:24:51.135 14:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:51.135 14:03:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.eYk 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.a9K ]] 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.a9K 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.vzK 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.dqe ]] 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.dqe 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.csZ 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Tue ]] 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Tue 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.bHB 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.HYg ]] 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.HYg 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.JMV 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:51.393 14:03:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:52.764 Waiting for block devices as requested 00:24:52.764 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:24:52.765 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:52.765 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:53.023 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:53.023 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:53.023 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:53.023 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:53.282 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:53.282 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:53.282 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:53.543 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:53.543 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:53.543 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:53.543 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:53.801 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:53.801 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:53.801 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:54.367 14:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:54.367 14:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:54.367 14:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:24:54.367 14:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:24:54.367 14:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:54.367 14:03:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:54.367 14:03:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:24:54.367 14:03:48 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:54.367 14:03:48 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:54.367 No valid GPT data, bailing 00:24:54.367 14:03:49 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:54.367 14:03:49 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:24:54.367 14:03:49 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:24:54.367 14:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:24:54.367 14:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:24:54.367 14:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:54.367 14:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:54.367 14:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:54.367 14:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:24:54.367 14:03:49 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:24:54.367 14:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:24:54.367 14:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:24:54.367 14:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:24:54.367 14:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:24:54.367 14:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:24:54.367 14:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:24:54.367 14:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:54.367 14:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:24:54.367 00:24:54.367 Discovery Log Number of Records 2, Generation counter 2 00:24:54.367 =====Discovery Log Entry 0====== 00:24:54.367 trtype: tcp 00:24:54.367 adrfam: ipv4 00:24:54.367 subtype: current discovery subsystem 00:24:54.367 treq: not specified, sq flow control disable supported 00:24:54.367 portid: 1 00:24:54.367 trsvcid: 4420 00:24:54.367 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:54.367 traddr: 10.0.0.1 00:24:54.367 eflags: none 00:24:54.367 sectype: none 00:24:54.367 =====Discovery Log Entry 1====== 00:24:54.367 trtype: tcp 00:24:54.367 adrfam: ipv4 00:24:54.367 subtype: nvme subsystem 00:24:54.367 treq: not specified, sq flow control disable supported 00:24:54.367 portid: 1 00:24:54.367 trsvcid: 4420 00:24:54.367 subnqn: nqn.2024-02.io.spdk:cnode0 00:24:54.367 traddr: 10.0.0.1 00:24:54.367 eflags: none 00:24:54.367 sectype: none 00:24:54.367 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:54.367 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:24:54.367 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:54.367 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:54.367 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:54.367 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:54.367 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:54.367 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:54.367 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmFmZmIwOTBmZDE5NGE4OTA3OGQyZjRjZTdiMzk2Zjk5ZDVmNTBmNzcxNGRhOGYxLVa/9A==: 00:24:54.367 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmU2ODUxMTJhM2FmYWU1OGU0NWI3NDFhNzJkZjFlOThiMjM5NjQxMDViYWEwYjFlgH+lVw==: 00:24:54.367 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:54.367 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:54.367 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmFmZmIwOTBmZDE5NGE4OTA3OGQyZjRjZTdiMzk2Zjk5ZDVmNTBmNzcxNGRhOGYxLVa/9A==: 00:24:54.367 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmU2ODUxMTJhM2FmYWU1OGU0NWI3NDFhNzJkZjFlOThiMjM5NjQxMDViYWEwYjFlgH+lVw==: 
]] 00:24:54.367 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmU2ODUxMTJhM2FmYWU1OGU0NWI3NDFhNzJkZjFlOThiMjM5NjQxMDViYWEwYjFlgH+lVw==: 00:24:54.367 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:54.367 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:24:54.367 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:54.367 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:54.367 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:24:54.367 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:54.367 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:24:54.367 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:54.367 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:54.367 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:54.367 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:54.367 14:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.367 14:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.367 14:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.367 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:54.367 14:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:54.367 14:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:54.367 14:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:54.367 14:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.367 14:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.367 14:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:54.367 14:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:54.367 14:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:54.367 14:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:54.367 14:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:54.367 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:54.367 14:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.367 14:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.626 nvme0n1 00:24:54.626 14:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.626 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:54.626 14:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.626 
14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:54.626 14:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.626 14:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.626 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.626 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:54.626 14:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.626 14:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.626 14:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.626 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:54.626 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:54.626 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:54.626 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:24:54.626 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:54.626 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:54.626 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:54.626 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:54.626 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjFjNDQwOGM3NjBkZjdiNjUxNzMzMDkyN2MzOWQ0ZGYQsnMf: 00:24:54.626 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzA2ZTQ1NzIwOWY5Zjg5YzJiMmY1Nzc1ZGY3ODgxNDYxMGE1Y2Y0MWViYWUxNDBiYWZkYzg1MmE4MWI0ZjI0N1S4BP8=: 00:24:54.626 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:54.626 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:54.626 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjFjNDQwOGM3NjBkZjdiNjUxNzMzMDkyN2MzOWQ0ZGYQsnMf: 00:24:54.626 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzA2ZTQ1NzIwOWY5Zjg5YzJiMmY1Nzc1ZGY3ODgxNDYxMGE1Y2Y0MWViYWUxNDBiYWZkYzg1MmE4MWI0ZjI0N1S4BP8=: ]] 00:24:54.626 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzA2ZTQ1NzIwOWY5Zjg5YzJiMmY1Nzc1ZGY3ODgxNDYxMGE1Y2Y0MWViYWUxNDBiYWZkYzg1MmE4MWI0ZjI0N1S4BP8=: 00:24:54.626 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:24:54.626 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:54.626 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:54.626 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:54.626 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:54.626 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:54.626 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:54.626 14:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.626 14:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.626 14:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.626 
14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:54.626 14:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:54.626 14:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:54.626 14:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:54.626 14:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.626 14:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.626 14:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:54.626 14:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:54.626 14:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:54.626 14:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:54.626 14:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:54.626 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:54.626 14:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.626 14:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.885 nvme0n1 00:24:54.885 14:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.885 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:54.885 14:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.885 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:54.885 14:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.885 14:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.885 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.885 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:54.885 14:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.885 14:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.885 14:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.885 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:54.885 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:54.885 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:54.885 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:54.885 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:54.885 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:54.885 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmFmZmIwOTBmZDE5NGE4OTA3OGQyZjRjZTdiMzk2Zjk5ZDVmNTBmNzcxNGRhOGYxLVa/9A==: 00:24:54.885 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmU2ODUxMTJhM2FmYWU1OGU0NWI3NDFhNzJkZjFlOThiMjM5NjQxMDViYWEwYjFlgH+lVw==: 00:24:54.885 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:54.885 14:03:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:54.885 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmFmZmIwOTBmZDE5NGE4OTA3OGQyZjRjZTdiMzk2Zjk5ZDVmNTBmNzcxNGRhOGYxLVa/9A==: 00:24:54.885 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmU2ODUxMTJhM2FmYWU1OGU0NWI3NDFhNzJkZjFlOThiMjM5NjQxMDViYWEwYjFlgH+lVw==: ]] 00:24:54.885 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmU2ODUxMTJhM2FmYWU1OGU0NWI3NDFhNzJkZjFlOThiMjM5NjQxMDViYWEwYjFlgH+lVw==: 00:24:54.885 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:24:54.885 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:54.885 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:54.885 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:54.885 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:54.885 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:54.885 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:54.885 14:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.885 14:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.885 14:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.885 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:54.885 14:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:54.885 14:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:54.885 14:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:54.885 14:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.885 14:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.885 14:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:54.885 14:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:54.885 14:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:54.885 14:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:54.885 14:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:54.885 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:54.885 14:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.885 14:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.885 nvme0n1 00:24:54.885 14:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.885 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:54.885 14:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.885 14:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.885 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
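connect_authenticate, as traced above, is a two-step RPC sequence on the NVMe host side (here the SPDK app running inside the namespace): first restrict the allowed DH-HMAC-CHAP digests and DH groups, then attach the controller with the key registered for that keyid, plus the controller key when bidirectional authentication is being exercised. A sketch of the keyid=1, sha256/ffdhe2048 case from this run, again assuming scripts/rpc.py and the default RPC socket:

RPC=./scripts/rpc.py
"$RPC" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
"$RPC" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
"$RPC" bdev_nvme_get_controllers | jq -r '.[].name'   # expect "nvme0" once authentication succeeds
"$RPC" bdev_nvme_detach_controller nvme0              # tear down before the next digest/dhgroup/keyid combination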
00:24:54.885 14:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.145 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:55.145 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:55.145 14:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.145 14:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.145 14:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.145 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:55.145 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:55.145 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:55.145 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:55.145 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:55.145 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:55.145 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU3OWJhMGRkZGM5NGFhZTE3NGZlNWRlZGU4NmU0NjbI6Tzq: 00:24:55.145 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQzM2M3Y2Y4ODFmNTQwMjliNjQ3YjdhNjVkYzcwODk8DVEz: 00:24:55.145 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:55.145 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:55.145 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU3OWJhMGRkZGM5NGFhZTE3NGZlNWRlZGU4NmU0NjbI6Tzq: 00:24:55.145 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQzM2M3Y2Y4ODFmNTQwMjliNjQ3YjdhNjVkYzcwODk8DVEz: ]] 00:24:55.145 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQzM2M3Y2Y4ODFmNTQwMjliNjQ3YjdhNjVkYzcwODk8DVEz: 00:24:55.145 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:24:55.145 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:55.145 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:55.145 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:55.145 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:55.145 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:55.145 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:55.145 14:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.145 14:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.145 14:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.145 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:55.145 14:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:55.145 14:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:55.145 14:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:55.145 14:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:55.145 14:03:49 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:55.145 14:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:55.145 14:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:55.145 14:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:55.145 14:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:55.145 14:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:55.145 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:55.145 14:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.145 14:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.145 nvme0n1 00:24:55.145 14:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.145 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:55.145 14:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.145 14:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.145 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:55.145 14:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.145 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:55.145 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:55.145 14:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.145 14:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.405 14:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.405 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:55.405 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:24:55.405 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:55.405 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:55.405 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:55.405 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:55.405 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWFiNDU0ZjVhMTY2NmM2Yjg0MGE5NWY2YjM4ZTIyNDljNzI0YzEyZTcxOTM5YzUwxqdYUg==: 00:24:55.405 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThkMjdlNzM5NDc1ZmQ1MmI4NTdjYjc2NmNmOTkxYWaXS2a7: 00:24:55.405 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:55.405 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:55.405 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWFiNDU0ZjVhMTY2NmM2Yjg0MGE5NWY2YjM4ZTIyNDljNzI0YzEyZTcxOTM5YzUwxqdYUg==: 00:24:55.405 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThkMjdlNzM5NDc1ZmQ1MmI4NTdjYjc2NmNmOTkxYWaXS2a7: ]] 00:24:55.405 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThkMjdlNzM5NDc1ZmQ1MmI4NTdjYjc2NmNmOTkxYWaXS2a7: 00:24:55.405 14:03:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:24:55.405 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:55.405 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:55.405 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:55.405 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:55.405 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:55.405 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:55.405 14:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.405 14:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.405 14:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.405 14:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:55.405 14:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:55.405 14:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:55.405 14:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:55.405 14:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:55.405 14:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:55.405 14:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:55.405 14:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:55.405 14:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:55.405 14:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:55.405 14:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:55.405 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:55.405 14:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.405 14:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.405 nvme0n1 00:24:55.405 14:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.405 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:55.405 14:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.405 14:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.405 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:55.405 14:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.405 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:55.405 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:55.405 14:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.405 14:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.405 14:03:50 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.405 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:55.405 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:24:55.405 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:55.405 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:55.405 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:55.405 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:55.405 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTU0NDcwMDQxMDU1ZjBiMDNkMmY5MzM3YjJkNTIxMmQ3NDU1ZWU0ZDgxNmI0ZTVkMjY0NGQzOGNmNWJmYmVmY3hJPG4=: 00:24:55.405 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:55.405 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:55.405 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:55.405 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTU0NDcwMDQxMDU1ZjBiMDNkMmY5MzM3YjJkNTIxMmQ3NDU1ZWU0ZDgxNmI0ZTVkMjY0NGQzOGNmNWJmYmVmY3hJPG4=: 00:24:55.405 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:55.405 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:24:55.405 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:55.405 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:55.405 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:55.405 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:55.405 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:55.405 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:55.405 14:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.405 14:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.405 14:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.405 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:55.405 14:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:55.405 14:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:55.405 14:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:55.405 14:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:55.406 14:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:55.406 14:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:55.406 14:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:55.406 14:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:55.406 14:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:55.406 14:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:55.406 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:55.406 14:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.406 14:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.705 nvme0n1 00:24:55.705 14:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.705 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:55.705 14:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.705 14:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.705 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:55.705 14:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.705 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:55.705 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:55.705 14:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.705 14:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.705 14:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.705 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:55.705 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:55.705 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:24:55.705 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:55.705 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:55.705 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:55.705 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:55.705 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjFjNDQwOGM3NjBkZjdiNjUxNzMzMDkyN2MzOWQ0ZGYQsnMf: 00:24:55.705 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzA2ZTQ1NzIwOWY5Zjg5YzJiMmY1Nzc1ZGY3ODgxNDYxMGE1Y2Y0MWViYWUxNDBiYWZkYzg1MmE4MWI0ZjI0N1S4BP8=: 00:24:55.705 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:55.705 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:55.705 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjFjNDQwOGM3NjBkZjdiNjUxNzMzMDkyN2MzOWQ0ZGYQsnMf: 00:24:55.705 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzA2ZTQ1NzIwOWY5Zjg5YzJiMmY1Nzc1ZGY3ODgxNDYxMGE1Y2Y0MWViYWUxNDBiYWZkYzg1MmE4MWI0ZjI0N1S4BP8=: ]] 00:24:55.705 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzA2ZTQ1NzIwOWY5Zjg5YzJiMmY1Nzc1ZGY3ODgxNDYxMGE1Y2Y0MWViYWUxNDBiYWZkYzg1MmE4MWI0ZjI0N1S4BP8=: 00:24:55.705 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:24:55.705 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:55.705 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:55.705 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:55.705 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:55.705 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:24:55.705 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:55.705 14:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.705 14:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.706 14:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.706 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:55.706 14:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:55.706 14:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:55.706 14:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:55.706 14:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:55.706 14:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:55.706 14:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:55.706 14:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:55.706 14:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:55.706 14:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:55.706 14:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:55.706 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:55.706 14:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.706 14:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.964 nvme0n1 00:24:55.964 14:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.964 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:55.964 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:55.964 14:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.964 14:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.964 14:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.964 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:55.964 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:55.964 14:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.964 14:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.964 14:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.964 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:55.964 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:24:55.964 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:55.964 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:55.964 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:55.964 14:03:50 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:24:55.964 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmFmZmIwOTBmZDE5NGE4OTA3OGQyZjRjZTdiMzk2Zjk5ZDVmNTBmNzcxNGRhOGYxLVa/9A==: 00:24:55.964 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmU2ODUxMTJhM2FmYWU1OGU0NWI3NDFhNzJkZjFlOThiMjM5NjQxMDViYWEwYjFlgH+lVw==: 00:24:55.964 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:55.964 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:55.964 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmFmZmIwOTBmZDE5NGE4OTA3OGQyZjRjZTdiMzk2Zjk5ZDVmNTBmNzcxNGRhOGYxLVa/9A==: 00:24:55.964 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmU2ODUxMTJhM2FmYWU1OGU0NWI3NDFhNzJkZjFlOThiMjM5NjQxMDViYWEwYjFlgH+lVw==: ]] 00:24:55.964 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmU2ODUxMTJhM2FmYWU1OGU0NWI3NDFhNzJkZjFlOThiMjM5NjQxMDViYWEwYjFlgH+lVw==: 00:24:55.964 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:24:55.964 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:55.964 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:55.964 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:55.964 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:55.964 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:55.964 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:55.964 14:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.964 14:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.964 14:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.964 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:55.964 14:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:55.964 14:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:55.964 14:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:55.964 14:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:55.964 14:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:55.964 14:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:55.964 14:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:55.964 14:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:55.964 14:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:55.964 14:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:55.964 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:55.964 14:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.964 14:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.223 nvme0n1 00:24:56.223 
14:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.223 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.223 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:56.223 14:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.223 14:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.223 14:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.223 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.223 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:56.223 14:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.223 14:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.223 14:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.223 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:56.223 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:24:56.223 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:56.223 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:56.223 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:56.223 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:56.223 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU3OWJhMGRkZGM5NGFhZTE3NGZlNWRlZGU4NmU0NjbI6Tzq: 00:24:56.223 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQzM2M3Y2Y4ODFmNTQwMjliNjQ3YjdhNjVkYzcwODk8DVEz: 00:24:56.223 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:56.223 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:56.223 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU3OWJhMGRkZGM5NGFhZTE3NGZlNWRlZGU4NmU0NjbI6Tzq: 00:24:56.223 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQzM2M3Y2Y4ODFmNTQwMjliNjQ3YjdhNjVkYzcwODk8DVEz: ]] 00:24:56.223 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQzM2M3Y2Y4ODFmNTQwMjliNjQ3YjdhNjVkYzcwODk8DVEz: 00:24:56.223 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:24:56.223 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:56.223 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:56.223 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:56.223 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:56.223 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:56.223 14:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:56.223 14:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.223 14:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.223 14:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.223 14:03:51 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:24:56.223 14:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:56.223 14:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:56.223 14:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:56.223 14:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:56.223 14:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:56.223 14:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:56.223 14:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:56.223 14:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:56.223 14:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:56.223 14:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:56.223 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:56.223 14:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.223 14:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.481 nvme0n1 00:24:56.481 14:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.481 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:56.481 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.481 14:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.481 14:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.481 14:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.481 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.481 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:56.481 14:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.481 14:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.481 14:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.481 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:56.481 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:24:56.481 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:56.481 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:56.481 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:56.481 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:56.481 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWFiNDU0ZjVhMTY2NmM2Yjg0MGE5NWY2YjM4ZTIyNDljNzI0YzEyZTcxOTM5YzUwxqdYUg==: 00:24:56.481 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThkMjdlNzM5NDc1ZmQ1MmI4NTdjYjc2NmNmOTkxYWaXS2a7: 00:24:56.481 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:56.481 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:24:56.481 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWFiNDU0ZjVhMTY2NmM2Yjg0MGE5NWY2YjM4ZTIyNDljNzI0YzEyZTcxOTM5YzUwxqdYUg==: 00:24:56.481 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThkMjdlNzM5NDc1ZmQ1MmI4NTdjYjc2NmNmOTkxYWaXS2a7: ]] 00:24:56.481 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThkMjdlNzM5NDc1ZmQ1MmI4NTdjYjc2NmNmOTkxYWaXS2a7: 00:24:56.481 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:24:56.481 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:56.481 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:56.481 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:56.481 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:56.481 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:56.481 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:56.481 14:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.481 14:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.481 14:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.481 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:56.481 14:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:56.481 14:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:56.481 14:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:56.481 14:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:56.481 14:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:56.481 14:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:56.481 14:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:56.481 14:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:56.481 14:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:56.481 14:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:56.481 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:56.481 14:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.481 14:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.740 nvme0n1 00:24:56.740 14:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.740 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.740 14:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.740 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:56.740 14:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.740 14:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.740 
14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.740 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:56.740 14:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.740 14:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.740 14:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.740 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:56.740 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:24:56.740 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:56.740 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:56.740 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:56.740 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:56.740 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTU0NDcwMDQxMDU1ZjBiMDNkMmY5MzM3YjJkNTIxMmQ3NDU1ZWU0ZDgxNmI0ZTVkMjY0NGQzOGNmNWJmYmVmY3hJPG4=: 00:24:56.740 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:56.740 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:56.740 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:56.740 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTU0NDcwMDQxMDU1ZjBiMDNkMmY5MzM3YjJkNTIxMmQ3NDU1ZWU0ZDgxNmI0ZTVkMjY0NGQzOGNmNWJmYmVmY3hJPG4=: 00:24:56.740 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:56.740 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:24:56.740 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:56.740 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:56.740 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:56.740 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:56.740 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:56.740 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:56.740 14:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.740 14:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.740 14:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.740 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:56.740 14:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:56.740 14:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:56.740 14:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:56.740 14:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:56.740 14:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:56.740 14:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:56.740 14:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:56.740 14:03:51 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:56.740 14:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:56.740 14:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:56.740 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:56.740 14:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.740 14:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.042 nvme0n1 00:24:57.042 14:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.042 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:57.042 14:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.042 14:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.042 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:57.042 14:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.042 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:57.042 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:57.042 14:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.042 14:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.042 14:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.042 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:57.042 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:57.042 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:24:57.042 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:57.042 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:57.042 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:57.042 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:57.042 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjFjNDQwOGM3NjBkZjdiNjUxNzMzMDkyN2MzOWQ0ZGYQsnMf: 00:24:57.042 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzA2ZTQ1NzIwOWY5Zjg5YzJiMmY1Nzc1ZGY3ODgxNDYxMGE1Y2Y0MWViYWUxNDBiYWZkYzg1MmE4MWI0ZjI0N1S4BP8=: 00:24:57.042 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:57.042 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:57.042 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjFjNDQwOGM3NjBkZjdiNjUxNzMzMDkyN2MzOWQ0ZGYQsnMf: 00:24:57.042 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzA2ZTQ1NzIwOWY5Zjg5YzJiMmY1Nzc1ZGY3ODgxNDYxMGE1Y2Y0MWViYWUxNDBiYWZkYzg1MmE4MWI0ZjI0N1S4BP8=: ]] 00:24:57.042 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzA2ZTQ1NzIwOWY5Zjg5YzJiMmY1Nzc1ZGY3ODgxNDYxMGE1Y2Y0MWViYWUxNDBiYWZkYzg1MmE4MWI0ZjI0N1S4BP8=: 00:24:57.042 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:24:57.042 14:03:51 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:57.042 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:57.042 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:57.042 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:57.042 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:57.042 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:57.042 14:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.042 14:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.042 14:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.042 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:57.042 14:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:57.042 14:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:57.042 14:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:57.042 14:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:57.042 14:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:57.042 14:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:57.042 14:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:57.042 14:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:57.042 14:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:57.042 14:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:57.042 14:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:57.042 14:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.042 14:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.301 nvme0n1 00:24:57.301 14:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.301 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:57.301 14:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.301 14:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.301 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:57.301 14:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.559 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:57.559 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:57.559 14:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.559 14:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.559 14:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.559 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:24:57.559 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:24:57.559 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:57.559 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:57.559 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:57.559 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:57.559 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmFmZmIwOTBmZDE5NGE4OTA3OGQyZjRjZTdiMzk2Zjk5ZDVmNTBmNzcxNGRhOGYxLVa/9A==: 00:24:57.559 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmU2ODUxMTJhM2FmYWU1OGU0NWI3NDFhNzJkZjFlOThiMjM5NjQxMDViYWEwYjFlgH+lVw==: 00:24:57.559 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:57.559 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:57.559 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmFmZmIwOTBmZDE5NGE4OTA3OGQyZjRjZTdiMzk2Zjk5ZDVmNTBmNzcxNGRhOGYxLVa/9A==: 00:24:57.559 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmU2ODUxMTJhM2FmYWU1OGU0NWI3NDFhNzJkZjFlOThiMjM5NjQxMDViYWEwYjFlgH+lVw==: ]] 00:24:57.559 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmU2ODUxMTJhM2FmYWU1OGU0NWI3NDFhNzJkZjFlOThiMjM5NjQxMDViYWEwYjFlgH+lVw==: 00:24:57.559 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:24:57.559 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:57.559 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:57.559 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:57.559 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:57.559 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:57.559 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:57.559 14:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.559 14:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.559 14:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.559 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:57.559 14:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:57.559 14:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:57.559 14:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:57.559 14:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:57.559 14:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:57.559 14:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:57.559 14:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:57.559 14:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:57.559 14:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:57.559 14:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:57.559 14:03:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:57.559 14:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.559 14:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.817 nvme0n1 00:24:57.817 14:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.817 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:57.817 14:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.817 14:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.817 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:57.817 14:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.817 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:57.817 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:57.817 14:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.817 14:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.817 14:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.817 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:57.817 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:24:57.817 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:57.817 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:57.817 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:57.817 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:57.818 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU3OWJhMGRkZGM5NGFhZTE3NGZlNWRlZGU4NmU0NjbI6Tzq: 00:24:57.818 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQzM2M3Y2Y4ODFmNTQwMjliNjQ3YjdhNjVkYzcwODk8DVEz: 00:24:57.818 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:57.818 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:57.818 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU3OWJhMGRkZGM5NGFhZTE3NGZlNWRlZGU4NmU0NjbI6Tzq: 00:24:57.818 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQzM2M3Y2Y4ODFmNTQwMjliNjQ3YjdhNjVkYzcwODk8DVEz: ]] 00:24:57.818 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQzM2M3Y2Y4ODFmNTQwMjliNjQ3YjdhNjVkYzcwODk8DVEz: 00:24:57.818 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:24:57.818 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:57.818 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:57.818 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:57.818 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:57.818 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:57.818 14:03:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:57.818 14:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.818 14:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.818 14:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.818 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:57.818 14:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:57.818 14:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:57.818 14:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:57.818 14:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:57.818 14:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:57.818 14:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:57.818 14:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:57.818 14:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:57.818 14:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:57.818 14:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:57.818 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:57.818 14:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.818 14:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.075 nvme0n1 00:24:58.075 14:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.075 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:58.075 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:58.075 14:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.075 14:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.075 14:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.333 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.333 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.333 14:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.333 14:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.333 14:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.333 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:58.333 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:24:58.333 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:58.333 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:58.333 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:58.333 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
00:24:58.333 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWFiNDU0ZjVhMTY2NmM2Yjg0MGE5NWY2YjM4ZTIyNDljNzI0YzEyZTcxOTM5YzUwxqdYUg==: 00:24:58.333 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThkMjdlNzM5NDc1ZmQ1MmI4NTdjYjc2NmNmOTkxYWaXS2a7: 00:24:58.333 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:58.333 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:58.333 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWFiNDU0ZjVhMTY2NmM2Yjg0MGE5NWY2YjM4ZTIyNDljNzI0YzEyZTcxOTM5YzUwxqdYUg==: 00:24:58.333 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThkMjdlNzM5NDc1ZmQ1MmI4NTdjYjc2NmNmOTkxYWaXS2a7: ]] 00:24:58.333 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThkMjdlNzM5NDc1ZmQ1MmI4NTdjYjc2NmNmOTkxYWaXS2a7: 00:24:58.333 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:24:58.333 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:58.333 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:58.333 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:58.333 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:58.333 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:58.333 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:58.333 14:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.333 14:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.333 14:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.334 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:58.334 14:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:58.334 14:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:58.334 14:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:58.334 14:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:58.334 14:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:58.334 14:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:58.334 14:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:58.334 14:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:58.334 14:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:58.334 14:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:58.334 14:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:58.334 14:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.334 14:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.591 nvme0n1 00:24:58.591 14:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.591 14:03:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:58.591 14:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:58.591 14:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.591 14:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.591 14:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.591 14:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.591 14:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.591 14:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.591 14:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.591 14:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.591 14:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:58.591 14:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:24:58.591 14:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:58.591 14:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:58.591 14:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:58.591 14:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:58.591 14:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTU0NDcwMDQxMDU1ZjBiMDNkMmY5MzM3YjJkNTIxMmQ3NDU1ZWU0ZDgxNmI0ZTVkMjY0NGQzOGNmNWJmYmVmY3hJPG4=: 00:24:58.591 14:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:58.591 14:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:58.591 14:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:58.591 14:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTU0NDcwMDQxMDU1ZjBiMDNkMmY5MzM3YjJkNTIxMmQ3NDU1ZWU0ZDgxNmI0ZTVkMjY0NGQzOGNmNWJmYmVmY3hJPG4=: 00:24:58.591 14:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:58.591 14:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:24:58.591 14:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:58.591 14:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:58.591 14:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:58.591 14:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:58.591 14:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:58.591 14:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:58.591 14:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.591 14:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.591 14:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.591 14:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:58.591 14:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:58.591 14:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:58.591 14:03:53 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:24:58.591 14:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:58.591 14:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:58.592 14:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:58.592 14:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:58.592 14:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:58.592 14:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:58.592 14:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:58.592 14:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:58.592 14:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.592 14:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.849 nvme0n1 00:24:58.849 14:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.849 14:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:58.849 14:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.849 14:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.849 14:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:58.849 14:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.106 14:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:59.106 14:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:59.106 14:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.106 14:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.106 14:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.106 14:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:59.106 14:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:59.106 14:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:24:59.106 14:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:59.106 14:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:59.106 14:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:59.106 14:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:59.106 14:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjFjNDQwOGM3NjBkZjdiNjUxNzMzMDkyN2MzOWQ0ZGYQsnMf: 00:24:59.106 14:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzA2ZTQ1NzIwOWY5Zjg5YzJiMmY1Nzc1ZGY3ODgxNDYxMGE1Y2Y0MWViYWUxNDBiYWZkYzg1MmE4MWI0ZjI0N1S4BP8=: 00:24:59.106 14:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:59.106 14:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:59.106 14:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjFjNDQwOGM3NjBkZjdiNjUxNzMzMDkyN2MzOWQ0ZGYQsnMf: 00:24:59.106 14:03:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzA2ZTQ1NzIwOWY5Zjg5YzJiMmY1Nzc1ZGY3ODgxNDYxMGE1Y2Y0MWViYWUxNDBiYWZkYzg1MmE4MWI0ZjI0N1S4BP8=: ]] 00:24:59.106 14:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzA2ZTQ1NzIwOWY5Zjg5YzJiMmY1Nzc1ZGY3ODgxNDYxMGE1Y2Y0MWViYWUxNDBiYWZkYzg1MmE4MWI0ZjI0N1S4BP8=: 00:24:59.106 14:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:24:59.106 14:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:59.106 14:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:59.106 14:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:59.106 14:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:59.106 14:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:59.106 14:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:59.106 14:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.106 14:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.106 14:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.106 14:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:59.106 14:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:59.106 14:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:59.106 14:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:59.106 14:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:59.106 14:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:59.106 14:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:59.106 14:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:59.106 14:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:59.106 14:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:59.106 14:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:59.106 14:03:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:59.106 14:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.106 14:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.672 nvme0n1 00:24:59.672 14:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.672 14:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:59.672 14:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.672 14:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.672 14:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:59.672 14:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.672 14:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:59.672 
14:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:59.672 14:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.672 14:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.672 14:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.672 14:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:59.672 14:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:24:59.672 14:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:59.672 14:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:59.672 14:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:59.672 14:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:59.672 14:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmFmZmIwOTBmZDE5NGE4OTA3OGQyZjRjZTdiMzk2Zjk5ZDVmNTBmNzcxNGRhOGYxLVa/9A==: 00:24:59.672 14:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmU2ODUxMTJhM2FmYWU1OGU0NWI3NDFhNzJkZjFlOThiMjM5NjQxMDViYWEwYjFlgH+lVw==: 00:24:59.672 14:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:59.672 14:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:59.672 14:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmFmZmIwOTBmZDE5NGE4OTA3OGQyZjRjZTdiMzk2Zjk5ZDVmNTBmNzcxNGRhOGYxLVa/9A==: 00:24:59.672 14:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmU2ODUxMTJhM2FmYWU1OGU0NWI3NDFhNzJkZjFlOThiMjM5NjQxMDViYWEwYjFlgH+lVw==: ]] 00:24:59.672 14:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmU2ODUxMTJhM2FmYWU1OGU0NWI3NDFhNzJkZjFlOThiMjM5NjQxMDViYWEwYjFlgH+lVw==: 00:24:59.672 14:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:24:59.672 14:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:59.672 14:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:59.672 14:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:59.672 14:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:59.672 14:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:59.672 14:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:59.672 14:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.672 14:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.672 14:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.672 14:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:59.672 14:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:59.672 14:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:59.672 14:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:59.672 14:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:59.672 14:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:59.672 14:03:54 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:59.672 14:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:59.672 14:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:59.672 14:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:59.672 14:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:59.672 14:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:59.672 14:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.672 14:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.237 nvme0n1 00:25:00.237 14:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.237 14:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.237 14:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.237 14:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:00.237 14:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.237 14:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.237 14:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.237 14:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.237 14:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.237 14:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.237 14:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.237 14:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:00.237 14:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:25:00.237 14:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:00.237 14:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:00.237 14:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:00.237 14:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:00.237 14:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU3OWJhMGRkZGM5NGFhZTE3NGZlNWRlZGU4NmU0NjbI6Tzq: 00:25:00.237 14:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQzM2M3Y2Y4ODFmNTQwMjliNjQ3YjdhNjVkYzcwODk8DVEz: 00:25:00.237 14:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:00.237 14:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:00.237 14:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU3OWJhMGRkZGM5NGFhZTE3NGZlNWRlZGU4NmU0NjbI6Tzq: 00:25:00.237 14:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQzM2M3Y2Y4ODFmNTQwMjliNjQ3YjdhNjVkYzcwODk8DVEz: ]] 00:25:00.237 14:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQzM2M3Y2Y4ODFmNTQwMjliNjQ3YjdhNjVkYzcwODk8DVEz: 00:25:00.237 14:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:25:00.237 14:03:54 nvmf_tcp.nvmf_auth_host -- 
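
get_main_ns_ip, traced repeatedly here from nvmf/common.sh@741-755, resolves which address the host should dial: it maps the transport to the name of an environment variable and then dereferences it, which is why the trace prints the name NVMF_INITIATOR_IP before the value 10.0.0.1. A rough reconstruction (the TEST_TRANSPORT variable name and the return codes are assumptions; the trace only shows the expanded values):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            [rdma]=NVMF_FIRST_TARGET_IP
            [tcp]=NVMF_INITIATOR_IP
        )

        [[ -z $TEST_TRANSPORT ]] && return 1                  # "tcp" in this run
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}                  # -> NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1                           # indirect expansion
        echo "${!ip}"                                         # -> 10.0.0.1
    }
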
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:00.237 14:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:00.237 14:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:00.237 14:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:00.237 14:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:00.237 14:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:00.237 14:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.237 14:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.237 14:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.237 14:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:00.237 14:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:00.237 14:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:00.237 14:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:00.237 14:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.237 14:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.237 14:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:00.237 14:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.237 14:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:00.237 14:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:00.237 14:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:00.237 14:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:00.237 14:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.237 14:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.802 nvme0n1 00:25:00.802 14:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.802 14:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.802 14:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:00.802 14:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.802 14:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.802 14:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.802 14:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.802 14:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.802 14:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.802 14:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.802 14:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.802 14:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:00.802 
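
Each iteration drives the SPDK host through the same four RPCs, all visible in the trace: restrict the allowed digest/DH group, attach with the named DH-HMAC-CHAP keys, confirm the controller actually came up, and detach again. rpc_cmd is the test framework's wrapper around scripts/rpc.py, and key2/ckey2 are key names registered with the host application earlier in the run, outside this excerpt; spelled out as standalone calls (the checkout-relative script path and default RPC socket are assumptions), one pass looks like:

    # One sha256/ffdhe6144 pass with key 2, as standalone rpc.py calls.
    ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # The [[ nvme0 == \n\v\m\e\0 ]] checks above amount to this verification:
    [[ $(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    # Tear down before the next digest/dhgroup/key combination.
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0
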
14:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:25:00.802 14:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:00.802 14:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:00.803 14:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:00.803 14:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:00.803 14:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWFiNDU0ZjVhMTY2NmM2Yjg0MGE5NWY2YjM4ZTIyNDljNzI0YzEyZTcxOTM5YzUwxqdYUg==: 00:25:00.803 14:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThkMjdlNzM5NDc1ZmQ1MmI4NTdjYjc2NmNmOTkxYWaXS2a7: 00:25:00.803 14:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:00.803 14:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:00.803 14:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWFiNDU0ZjVhMTY2NmM2Yjg0MGE5NWY2YjM4ZTIyNDljNzI0YzEyZTcxOTM5YzUwxqdYUg==: 00:25:00.803 14:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThkMjdlNzM5NDc1ZmQ1MmI4NTdjYjc2NmNmOTkxYWaXS2a7: ]] 00:25:00.803 14:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThkMjdlNzM5NDc1ZmQ1MmI4NTdjYjc2NmNmOTkxYWaXS2a7: 00:25:00.803 14:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:25:00.803 14:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:00.803 14:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:00.803 14:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:00.803 14:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:00.803 14:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:00.803 14:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:00.803 14:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.803 14:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.803 14:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.803 14:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:00.803 14:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:00.803 14:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:00.803 14:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:00.803 14:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.803 14:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.803 14:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:00.803 14:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.803 14:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:00.803 14:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:00.803 14:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:00.803 14:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:00.803 14:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.803 14:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.368 nvme0n1 00:25:01.368 14:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.368 14:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:01.368 14:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.368 14:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.368 14:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:01.368 14:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.368 14:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:01.368 14:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:01.368 14:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.368 14:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.368 14:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.368 14:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:01.368 14:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:25:01.368 14:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:01.368 14:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:01.369 14:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:01.369 14:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:01.369 14:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTU0NDcwMDQxMDU1ZjBiMDNkMmY5MzM3YjJkNTIxMmQ3NDU1ZWU0ZDgxNmI0ZTVkMjY0NGQzOGNmNWJmYmVmY3hJPG4=: 00:25:01.369 14:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:01.369 14:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:01.369 14:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:01.369 14:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTU0NDcwMDQxMDU1ZjBiMDNkMmY5MzM3YjJkNTIxMmQ3NDU1ZWU0ZDgxNmI0ZTVkMjY0NGQzOGNmNWJmYmVmY3hJPG4=: 00:25:01.369 14:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:01.369 14:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:25:01.369 14:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:01.369 14:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:01.369 14:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:01.369 14:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:01.369 14:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:01.369 14:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:01.369 14:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.369 14:03:56 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:01.626 14:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.626 14:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:01.626 14:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:01.626 14:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:01.626 14:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:01.626 14:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:01.626 14:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:01.626 14:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:01.626 14:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:01.626 14:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:01.626 14:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:01.626 14:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:01.626 14:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:01.626 14:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.626 14:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.884 nvme0n1 00:25:01.884 14:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.884 14:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:01.884 14:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.884 14:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:01.884 14:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.884 14:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.884 14:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:01.884 14:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:01.884 14:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.884 14:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.141 14:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.141 14:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:02.141 14:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:02.141 14:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:25:02.141 14:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:02.141 14:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:02.141 14:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:02.141 14:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:02.141 14:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjFjNDQwOGM3NjBkZjdiNjUxNzMzMDkyN2MzOWQ0ZGYQsnMf: 00:25:02.141 14:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NzA2ZTQ1NzIwOWY5Zjg5YzJiMmY1Nzc1ZGY3ODgxNDYxMGE1Y2Y0MWViYWUxNDBiYWZkYzg1MmE4MWI0ZjI0N1S4BP8=: 00:25:02.141 14:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:02.141 14:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:02.141 14:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjFjNDQwOGM3NjBkZjdiNjUxNzMzMDkyN2MzOWQ0ZGYQsnMf: 00:25:02.141 14:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzA2ZTQ1NzIwOWY5Zjg5YzJiMmY1Nzc1ZGY3ODgxNDYxMGE1Y2Y0MWViYWUxNDBiYWZkYzg1MmE4MWI0ZjI0N1S4BP8=: ]] 00:25:02.141 14:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzA2ZTQ1NzIwOWY5Zjg5YzJiMmY1Nzc1ZGY3ODgxNDYxMGE1Y2Y0MWViYWUxNDBiYWZkYzg1MmE4MWI0ZjI0N1S4BP8=: 00:25:02.141 14:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:25:02.141 14:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:02.141 14:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:02.141 14:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:02.141 14:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:02.141 14:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:02.141 14:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:02.141 14:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.141 14:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.141 14:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.141 14:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:02.141 14:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:02.141 14:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:02.141 14:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:02.141 14:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:02.141 14:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:02.141 14:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:02.141 14:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:02.141 14:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:02.141 14:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:02.141 14:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:02.141 14:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:02.141 14:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.141 14:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.074 nvme0n1 00:25:03.074 14:03:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.074 14:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:03.074 14:03:57 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.074 14:03:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.074 14:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:03.074 14:03:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.074 14:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.074 14:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:03.074 14:03:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.074 14:03:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.074 14:03:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.074 14:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:03.074 14:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:25:03.074 14:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:03.074 14:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:03.074 14:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:03.074 14:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:03.074 14:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmFmZmIwOTBmZDE5NGE4OTA3OGQyZjRjZTdiMzk2Zjk5ZDVmNTBmNzcxNGRhOGYxLVa/9A==: 00:25:03.074 14:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmU2ODUxMTJhM2FmYWU1OGU0NWI3NDFhNzJkZjFlOThiMjM5NjQxMDViYWEwYjFlgH+lVw==: 00:25:03.074 14:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:03.074 14:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:03.074 14:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmFmZmIwOTBmZDE5NGE4OTA3OGQyZjRjZTdiMzk2Zjk5ZDVmNTBmNzcxNGRhOGYxLVa/9A==: 00:25:03.074 14:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmU2ODUxMTJhM2FmYWU1OGU0NWI3NDFhNzJkZjFlOThiMjM5NjQxMDViYWEwYjFlgH+lVw==: ]] 00:25:03.074 14:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmU2ODUxMTJhM2FmYWU1OGU0NWI3NDFhNzJkZjFlOThiMjM5NjQxMDViYWEwYjFlgH+lVw==: 00:25:03.074 14:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:25:03.074 14:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:03.074 14:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:03.074 14:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:03.074 14:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:03.074 14:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:03.074 14:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:03.074 14:03:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.074 14:03:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.074 14:03:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.074 14:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:03.074 14:03:57 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:25:03.074 14:03:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:03.074 14:03:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:03.074 14:03:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:03.074 14:03:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:03.074 14:03:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:03.074 14:03:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:03.074 14:03:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:03.074 14:03:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:03.074 14:03:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:03.074 14:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:03.074 14:03:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.074 14:03:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.007 nvme0n1 00:25:04.007 14:03:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.007 14:03:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.007 14:03:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:04.007 14:03:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.007 14:03:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.007 14:03:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.007 14:03:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.007 14:03:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.007 14:03:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.007 14:03:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.007 14:03:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.007 14:03:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:04.007 14:03:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:25:04.007 14:03:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:04.007 14:03:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:04.007 14:03:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:04.007 14:03:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:04.007 14:03:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU3OWJhMGRkZGM5NGFhZTE3NGZlNWRlZGU4NmU0NjbI6Tzq: 00:25:04.007 14:03:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQzM2M3Y2Y4ODFmNTQwMjliNjQ3YjdhNjVkYzcwODk8DVEz: 00:25:04.007 14:03:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:04.007 14:03:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:04.007 14:03:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NDU3OWJhMGRkZGM5NGFhZTE3NGZlNWRlZGU4NmU0NjbI6Tzq: 00:25:04.007 14:03:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQzM2M3Y2Y4ODFmNTQwMjliNjQ3YjdhNjVkYzcwODk8DVEz: ]] 00:25:04.007 14:03:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQzM2M3Y2Y4ODFmNTQwMjliNjQ3YjdhNjVkYzcwODk8DVEz: 00:25:04.007 14:03:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:25:04.007 14:03:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:04.007 14:03:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:04.007 14:03:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:04.007 14:03:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:04.007 14:03:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:04.007 14:03:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:04.007 14:03:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.007 14:03:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.007 14:03:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.007 14:03:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:04.007 14:03:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:04.007 14:03:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:04.007 14:03:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:04.007 14:03:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.007 14:03:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.007 14:03:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:04.007 14:03:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.007 14:03:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:04.007 14:03:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:04.007 14:03:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:04.007 14:03:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:04.007 14:03:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.007 14:03:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.939 nvme0n1 00:25:04.939 14:03:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.939 14:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.939 14:03:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.939 14:03:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.939 14:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:04.939 14:03:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.939 14:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.939 
14:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.940 14:03:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.940 14:03:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.940 14:03:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.940 14:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:04.940 14:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:25:04.940 14:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:04.940 14:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:04.940 14:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:04.940 14:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:04.940 14:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWFiNDU0ZjVhMTY2NmM2Yjg0MGE5NWY2YjM4ZTIyNDljNzI0YzEyZTcxOTM5YzUwxqdYUg==: 00:25:04.940 14:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThkMjdlNzM5NDc1ZmQ1MmI4NTdjYjc2NmNmOTkxYWaXS2a7: 00:25:04.940 14:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:04.940 14:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:04.940 14:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWFiNDU0ZjVhMTY2NmM2Yjg0MGE5NWY2YjM4ZTIyNDljNzI0YzEyZTcxOTM5YzUwxqdYUg==: 00:25:04.940 14:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThkMjdlNzM5NDc1ZmQ1MmI4NTdjYjc2NmNmOTkxYWaXS2a7: ]] 00:25:04.940 14:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThkMjdlNzM5NDc1ZmQ1MmI4NTdjYjc2NmNmOTkxYWaXS2a7: 00:25:04.940 14:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:25:04.940 14:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:04.940 14:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:04.940 14:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:04.940 14:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:04.940 14:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:04.940 14:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:04.940 14:03:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.940 14:03:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.940 14:03:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.940 14:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:04.940 14:03:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:04.940 14:03:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:04.940 14:03:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:04.940 14:03:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.940 14:03:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.940 14:03:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
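
This stretch of the log is one walk through the test's key matrix: every key ID (0-4) is exercised first with sha256 over ffdhe6144, then sha256 over ffdhe8192, and, a little further below, sha384 over ffdhe2048. The driving loop, reconstructed from the host/auth.sh@100-104 trace lines (array contents beyond the combinations visible in this log are assumptions):

    for digest in "${digests[@]}"; do           # sha256, sha384, ...
        for dhgroup in "${dhgroups[@]}"; do     # ffdhe2048, ffdhe6144, ffdhe8192, ...
            for keyid in "${!keys[@]}"; do      # 0..4
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side (configfs)
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side (bdev_nvme RPCs)
            done
        done
    done
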
00:25:04.940 14:03:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.940 14:03:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:04.940 14:03:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:04.940 14:03:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:04.940 14:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:04.940 14:03:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.940 14:03:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.872 nvme0n1 00:25:05.872 14:04:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.873 14:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.873 14:04:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.873 14:04:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.873 14:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.873 14:04:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.873 14:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.873 14:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.873 14:04:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.873 14:04:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.873 14:04:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.873 14:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.873 14:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:25:05.873 14:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.873 14:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:05.873 14:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:05.873 14:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:05.873 14:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTU0NDcwMDQxMDU1ZjBiMDNkMmY5MzM3YjJkNTIxMmQ3NDU1ZWU0ZDgxNmI0ZTVkMjY0NGQzOGNmNWJmYmVmY3hJPG4=: 00:25:05.873 14:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:05.873 14:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:05.873 14:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:05.873 14:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTU0NDcwMDQxMDU1ZjBiMDNkMmY5MzM3YjJkNTIxMmQ3NDU1ZWU0ZDgxNmI0ZTVkMjY0NGQzOGNmNWJmYmVmY3hJPG4=: 00:25:05.873 14:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:05.873 14:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:25:05.873 14:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.873 14:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:05.873 14:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:05.873 
14:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:05.873 14:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.873 14:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:05.873 14:04:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.873 14:04:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.873 14:04:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.873 14:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:05.873 14:04:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:05.873 14:04:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:05.873 14:04:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:05.873 14:04:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.873 14:04:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.873 14:04:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:05.873 14:04:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.873 14:04:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:05.873 14:04:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:05.873 14:04:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:05.873 14:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:05.873 14:04:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.873 14:04:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.805 nvme0n1 00:25:06.805 14:04:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.805 14:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:06.805 14:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:06.805 14:04:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.806 14:04:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.806 14:04:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.064 14:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.064 14:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.064 14:04:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.064 14:04:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.064 14:04:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.064 14:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:07.064 14:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:07.064 14:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:07.064 14:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:25:07.064 14:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.064 14:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:07.064 14:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:07.064 14:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:07.064 14:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjFjNDQwOGM3NjBkZjdiNjUxNzMzMDkyN2MzOWQ0ZGYQsnMf: 00:25:07.064 14:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzA2ZTQ1NzIwOWY5Zjg5YzJiMmY1Nzc1ZGY3ODgxNDYxMGE1Y2Y0MWViYWUxNDBiYWZkYzg1MmE4MWI0ZjI0N1S4BP8=: 00:25:07.064 14:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:07.064 14:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:07.064 14:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjFjNDQwOGM3NjBkZjdiNjUxNzMzMDkyN2MzOWQ0ZGYQsnMf: 00:25:07.064 14:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzA2ZTQ1NzIwOWY5Zjg5YzJiMmY1Nzc1ZGY3ODgxNDYxMGE1Y2Y0MWViYWUxNDBiYWZkYzg1MmE4MWI0ZjI0N1S4BP8=: ]] 00:25:07.064 14:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzA2ZTQ1NzIwOWY5Zjg5YzJiMmY1Nzc1ZGY3ODgxNDYxMGE1Y2Y0MWViYWUxNDBiYWZkYzg1MmE4MWI0ZjI0N1S4BP8=: 00:25:07.064 14:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:25:07.064 14:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:07.064 14:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:07.064 14:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:07.064 14:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:07.064 14:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:07.064 14:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:07.064 14:04:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.064 14:04:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.064 14:04:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.064 14:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:07.064 14:04:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:07.064 14:04:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:07.064 14:04:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:07.064 14:04:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.064 14:04:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.064 14:04:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:07.064 14:04:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.064 14:04:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:07.064 14:04:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:07.064 14:04:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:07.064 14:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:07.064 14:04:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.064 14:04:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.064 nvme0n1 00:25:07.064 14:04:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.064 14:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.064 14:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:07.064 14:04:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.064 14:04:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.064 14:04:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.064 14:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.064 14:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.064 14:04:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.064 14:04:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.322 14:04:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.322 14:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:07.322 14:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:25:07.322 14:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.322 14:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:07.322 14:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:07.322 14:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:07.322 14:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmFmZmIwOTBmZDE5NGE4OTA3OGQyZjRjZTdiMzk2Zjk5ZDVmNTBmNzcxNGRhOGYxLVa/9A==: 00:25:07.322 14:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmU2ODUxMTJhM2FmYWU1OGU0NWI3NDFhNzJkZjFlOThiMjM5NjQxMDViYWEwYjFlgH+lVw==: 00:25:07.322 14:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:07.322 14:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:07.322 14:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmFmZmIwOTBmZDE5NGE4OTA3OGQyZjRjZTdiMzk2Zjk5ZDVmNTBmNzcxNGRhOGYxLVa/9A==: 00:25:07.322 14:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmU2ODUxMTJhM2FmYWU1OGU0NWI3NDFhNzJkZjFlOThiMjM5NjQxMDViYWEwYjFlgH+lVw==: ]] 00:25:07.322 14:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmU2ODUxMTJhM2FmYWU1OGU0NWI3NDFhNzJkZjFlOThiMjM5NjQxMDViYWEwYjFlgH+lVw==: 00:25:07.322 14:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:25:07.322 14:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:07.322 14:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:07.323 14:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:07.323 14:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:07.323 14:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
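
The secrets exchanged above all use the NVMe-oF DH-HMAC-CHAP interchange format, DHHC-1:<id>:<base64 blob>:. As I read that format (background information, not something this log states), the two-digit id selects the secret transform/length (00 = used as-is, 01/02/03 = 32/48/64-byte secrets) and the base64 blob carries the secret followed by a 4-byte CRC-32. A quick way to poke at one of the keys from this log:

    # Inspect the base64 payload of a DHHC-1 secret; prints secret length plus 4 CRC bytes.
    key='DHHC-1:00:ZjFjNDQwOGM3NjBkZjdiNjUxNzMzMDkyN2MzOWQ0ZGYQsnMf:'
    blob=$(cut -d: -f3 <<< "$key")
    echo -n "$blob" | base64 -d | wc -c
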
00:25:07.323 14:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:07.323 14:04:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.323 14:04:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.323 14:04:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.323 14:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:07.323 14:04:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:07.323 14:04:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:07.323 14:04:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:07.323 14:04:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.323 14:04:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.323 14:04:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:07.323 14:04:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.323 14:04:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:07.323 14:04:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:07.323 14:04:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:07.323 14:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:07.323 14:04:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.323 14:04:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.323 nvme0n1 00:25:07.323 14:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.323 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.323 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:07.323 14:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.323 14:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.323 14:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.323 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.323 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.323 14:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.323 14:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.323 14:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.323 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:07.323 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:25:07.323 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.323 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:07.323 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:07.323 14:04:02 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:25:07.323 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU3OWJhMGRkZGM5NGFhZTE3NGZlNWRlZGU4NmU0NjbI6Tzq: 00:25:07.323 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQzM2M3Y2Y4ODFmNTQwMjliNjQ3YjdhNjVkYzcwODk8DVEz: 00:25:07.323 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:07.323 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:07.323 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU3OWJhMGRkZGM5NGFhZTE3NGZlNWRlZGU4NmU0NjbI6Tzq: 00:25:07.323 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQzM2M3Y2Y4ODFmNTQwMjliNjQ3YjdhNjVkYzcwODk8DVEz: ]] 00:25:07.323 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQzM2M3Y2Y4ODFmNTQwMjliNjQ3YjdhNjVkYzcwODk8DVEz: 00:25:07.323 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:25:07.323 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:07.323 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:07.323 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:07.323 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:07.323 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:07.323 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:07.323 14:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.323 14:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.323 14:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.323 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:07.323 14:04:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:07.323 14:04:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:07.323 14:04:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:07.323 14:04:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.323 14:04:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.323 14:04:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:07.323 14:04:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.323 14:04:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:07.323 14:04:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:07.323 14:04:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:07.323 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:07.323 14:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.323 14:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.582 nvme0n1 00:25:07.582 14:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.582 14:04:02 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.582 14:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.582 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:07.582 14:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.582 14:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.582 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.582 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.582 14:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.582 14:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.582 14:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.582 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:07.582 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:07.582 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.582 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:07.582 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:07.582 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:07.582 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWFiNDU0ZjVhMTY2NmM2Yjg0MGE5NWY2YjM4ZTIyNDljNzI0YzEyZTcxOTM5YzUwxqdYUg==: 00:25:07.582 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThkMjdlNzM5NDc1ZmQ1MmI4NTdjYjc2NmNmOTkxYWaXS2a7: 00:25:07.582 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:07.582 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:07.582 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWFiNDU0ZjVhMTY2NmM2Yjg0MGE5NWY2YjM4ZTIyNDljNzI0YzEyZTcxOTM5YzUwxqdYUg==: 00:25:07.582 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThkMjdlNzM5NDc1ZmQ1MmI4NTdjYjc2NmNmOTkxYWaXS2a7: ]] 00:25:07.582 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThkMjdlNzM5NDc1ZmQ1MmI4NTdjYjc2NmNmOTkxYWaXS2a7: 00:25:07.582 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:25:07.582 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:07.582 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:07.582 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:07.582 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:07.582 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:07.582 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:07.582 14:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.582 14:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.582 14:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.582 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:07.582 14:04:02 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:25:07.582 14:04:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:07.582 14:04:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:07.582 14:04:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.582 14:04:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.582 14:04:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:07.582 14:04:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.582 14:04:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:07.582 14:04:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:07.582 14:04:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:07.582 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:07.582 14:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.582 14:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.840 nvme0n1 00:25:07.840 14:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.840 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.840 14:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.840 14:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.840 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:07.840 14:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.840 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.840 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.840 14:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.840 14:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.840 14:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.840 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:07.840 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:07.840 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.840 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:07.840 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:07.840 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:07.840 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTU0NDcwMDQxMDU1ZjBiMDNkMmY5MzM3YjJkNTIxMmQ3NDU1ZWU0ZDgxNmI0ZTVkMjY0NGQzOGNmNWJmYmVmY3hJPG4=: 00:25:07.840 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:07.840 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:07.840 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:07.840 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZTU0NDcwMDQxMDU1ZjBiMDNkMmY5MzM3YjJkNTIxMmQ3NDU1ZWU0ZDgxNmI0ZTVkMjY0NGQzOGNmNWJmYmVmY3hJPG4=: 00:25:07.840 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:07.840 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:25:07.840 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:07.840 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:07.840 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:07.840 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:07.840 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:07.840 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:07.840 14:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.840 14:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.840 14:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.840 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:07.840 14:04:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:07.840 14:04:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:07.840 14:04:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:07.840 14:04:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.840 14:04:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.840 14:04:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:07.840 14:04:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.840 14:04:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:07.840 14:04:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:07.840 14:04:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:07.840 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:07.840 14:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.840 14:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.098 nvme0n1 00:25:08.098 14:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.098 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.098 14:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.098 14:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.098 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:08.098 14:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.098 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.098 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:08.098 14:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:25:08.098 14:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.098 14:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.098 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:08.098 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:08.098 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:25:08.098 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:08.098 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:08.098 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:08.098 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:08.098 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjFjNDQwOGM3NjBkZjdiNjUxNzMzMDkyN2MzOWQ0ZGYQsnMf: 00:25:08.098 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzA2ZTQ1NzIwOWY5Zjg5YzJiMmY1Nzc1ZGY3ODgxNDYxMGE1Y2Y0MWViYWUxNDBiYWZkYzg1MmE4MWI0ZjI0N1S4BP8=: 00:25:08.098 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:08.098 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:08.098 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjFjNDQwOGM3NjBkZjdiNjUxNzMzMDkyN2MzOWQ0ZGYQsnMf: 00:25:08.098 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzA2ZTQ1NzIwOWY5Zjg5YzJiMmY1Nzc1ZGY3ODgxNDYxMGE1Y2Y0MWViYWUxNDBiYWZkYzg1MmE4MWI0ZjI0N1S4BP8=: ]] 00:25:08.098 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzA2ZTQ1NzIwOWY5Zjg5YzJiMmY1Nzc1ZGY3ODgxNDYxMGE1Y2Y0MWViYWUxNDBiYWZkYzg1MmE4MWI0ZjI0N1S4BP8=: 00:25:08.098 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:25:08.098 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:08.098 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:08.098 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:08.098 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:08.098 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:08.098 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:08.098 14:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.098 14:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.098 14:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.098 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:08.098 14:04:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:08.098 14:04:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:08.098 14:04:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:08.098 14:04:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.098 14:04:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.098 14:04:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:25:08.098 14:04:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.098 14:04:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:08.098 14:04:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:08.099 14:04:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:08.099 14:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:08.099 14:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.099 14:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.357 nvme0n1 00:25:08.357 14:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.357 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.357 14:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.357 14:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.357 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:08.357 14:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.357 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.357 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:08.357 14:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.357 14:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.357 14:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.357 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:08.357 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:08.358 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:08.358 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:08.358 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:08.358 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:08.358 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmFmZmIwOTBmZDE5NGE4OTA3OGQyZjRjZTdiMzk2Zjk5ZDVmNTBmNzcxNGRhOGYxLVa/9A==: 00:25:08.358 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmU2ODUxMTJhM2FmYWU1OGU0NWI3NDFhNzJkZjFlOThiMjM5NjQxMDViYWEwYjFlgH+lVw==: 00:25:08.358 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:08.358 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:08.358 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmFmZmIwOTBmZDE5NGE4OTA3OGQyZjRjZTdiMzk2Zjk5ZDVmNTBmNzcxNGRhOGYxLVa/9A==: 00:25:08.358 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmU2ODUxMTJhM2FmYWU1OGU0NWI3NDFhNzJkZjFlOThiMjM5NjQxMDViYWEwYjFlgH+lVw==: ]] 00:25:08.358 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmU2ODUxMTJhM2FmYWU1OGU0NWI3NDFhNzJkZjFlOThiMjM5NjQxMDViYWEwYjFlgH+lVw==: 00:25:08.358 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
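
The trace above repeats one cycle per digest/dhgroup/keyid combination: program the key on the target, constrain the host's bdev_nvme options to that digest and DH group, attach with the matching DH-HMAC-CHAP key, confirm the controller appears, and detach. A minimal sketch of that loop, reconstructed from the xtrace lines (helper names, RPC arguments, and the keyid-0 secrets are taken verbatim from the trace; the actual host/auth.sh body may differ in detail):

    # Sketch reconstructed from the trace above; not the literal body of host/auth.sh.
    digest=sha384
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)   # groups exercised in this part of the log
    # Secrets as printed in the trace; keyid 0 shown here, keyids 1-4 appear above.
    keys=("DHHC-1:00:ZjFjNDQwOGM3NjBkZjdiNjUxNzMzMDkyN2MzOWQ0ZGYQsnMf:")
    ckeys=("DHHC-1:03:NzA2ZTQ1NzIwOWY5Zjg5YzJiMmY1Nzc1ZGY3ODgxNDYxMGE1Y2Y0MWViYWUxNDBiYWZkYzg1MmE4MWI0ZjI0N1S4BP8=:")

    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            # Target side: install the key for this digest/dhgroup/keyid (auth.sh helper).
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

            # Host side: limit bdev_nvme to this digest/DH group, then attach using the
            # key names (key0..key4 / ckey0..ckey3) registered earlier in the test; an
            # empty ckeys entry drops --dhchap-ctrlr-key, i.e. no bidirectional auth.
            ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
            rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
                -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
                --dhchap-key "key${keyid}" "${ckey[@]}"

            # Authentication succeeded if the controller shows up as nvme0; then tear down.
            [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
            rpc_cmd bdev_nvme_detach_controller nvme0
        done
    done
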
00:25:08.358 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:08.358 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:08.358 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:08.358 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:08.358 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:08.358 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:08.358 14:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.358 14:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.358 14:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.358 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:08.358 14:04:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:08.358 14:04:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:08.358 14:04:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:08.358 14:04:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.358 14:04:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.358 14:04:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:08.358 14:04:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.358 14:04:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:08.358 14:04:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:08.358 14:04:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:08.358 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:08.358 14:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.358 14:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.624 nvme0n1 00:25:08.624 14:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.624 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.624 14:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.624 14:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.624 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:08.624 14:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.624 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.624 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:08.624 14:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.624 14:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.624 14:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.624 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:25:08.624 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:25:08.624 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:08.624 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:08.624 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:08.624 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:08.624 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU3OWJhMGRkZGM5NGFhZTE3NGZlNWRlZGU4NmU0NjbI6Tzq: 00:25:08.624 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQzM2M3Y2Y4ODFmNTQwMjliNjQ3YjdhNjVkYzcwODk8DVEz: 00:25:08.624 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:08.624 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:08.624 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU3OWJhMGRkZGM5NGFhZTE3NGZlNWRlZGU4NmU0NjbI6Tzq: 00:25:08.624 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQzM2M3Y2Y4ODFmNTQwMjliNjQ3YjdhNjVkYzcwODk8DVEz: ]] 00:25:08.624 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQzM2M3Y2Y4ODFmNTQwMjliNjQ3YjdhNjVkYzcwODk8DVEz: 00:25:08.624 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:25:08.624 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:08.624 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:08.624 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:08.624 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:08.624 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:08.624 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:08.624 14:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.624 14:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.624 14:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.624 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:08.624 14:04:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:08.624 14:04:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:08.624 14:04:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:08.624 14:04:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.624 14:04:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.624 14:04:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:08.624 14:04:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.624 14:04:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:08.624 14:04:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:08.624 14:04:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:08.624 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:08.624 14:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.624 14:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.888 nvme0n1 00:25:08.888 14:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.888 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.888 14:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.888 14:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.888 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:08.888 14:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.888 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.888 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:08.888 14:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.888 14:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.888 14:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.889 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:08.889 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:25:08.889 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:08.889 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:08.889 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:08.889 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:08.889 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWFiNDU0ZjVhMTY2NmM2Yjg0MGE5NWY2YjM4ZTIyNDljNzI0YzEyZTcxOTM5YzUwxqdYUg==: 00:25:08.889 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThkMjdlNzM5NDc1ZmQ1MmI4NTdjYjc2NmNmOTkxYWaXS2a7: 00:25:08.889 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:08.889 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:08.889 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWFiNDU0ZjVhMTY2NmM2Yjg0MGE5NWY2YjM4ZTIyNDljNzI0YzEyZTcxOTM5YzUwxqdYUg==: 00:25:08.889 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThkMjdlNzM5NDc1ZmQ1MmI4NTdjYjc2NmNmOTkxYWaXS2a7: ]] 00:25:08.889 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThkMjdlNzM5NDc1ZmQ1MmI4NTdjYjc2NmNmOTkxYWaXS2a7: 00:25:08.889 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:25:08.889 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:08.889 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:08.889 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:08.889 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:08.889 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:08.889 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:08.889 14:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.889 14:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.889 14:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.889 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:08.889 14:04:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:08.889 14:04:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:08.889 14:04:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:08.889 14:04:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.889 14:04:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.889 14:04:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:08.889 14:04:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.889 14:04:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:08.889 14:04:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:08.889 14:04:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:08.889 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:08.889 14:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.889 14:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.147 nvme0n1 00:25:09.147 14:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.147 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:09.147 14:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.147 14:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.147 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:09.147 14:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.147 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.147 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:09.147 14:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.147 14:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.147 14:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.147 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:09.147 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:25:09.147 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:09.147 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:09.147 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:09.147 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:09.147 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZTU0NDcwMDQxMDU1ZjBiMDNkMmY5MzM3YjJkNTIxMmQ3NDU1ZWU0ZDgxNmI0ZTVkMjY0NGQzOGNmNWJmYmVmY3hJPG4=: 00:25:09.147 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:09.147 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:09.147 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:09.147 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTU0NDcwMDQxMDU1ZjBiMDNkMmY5MzM3YjJkNTIxMmQ3NDU1ZWU0ZDgxNmI0ZTVkMjY0NGQzOGNmNWJmYmVmY3hJPG4=: 00:25:09.147 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:09.147 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:25:09.147 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:09.147 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:09.147 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:09.147 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:09.147 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:09.147 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:09.147 14:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.147 14:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.147 14:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.147 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:09.147 14:04:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:09.147 14:04:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:09.147 14:04:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:09.147 14:04:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:09.147 14:04:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:09.147 14:04:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:09.147 14:04:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:09.147 14:04:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:09.147 14:04:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:09.147 14:04:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:09.147 14:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:09.147 14:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.147 14:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.406 nvme0n1 00:25:09.406 14:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.406 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:09.406 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:09.406 14:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.406 14:04:04 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.406 14:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.406 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.406 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:09.406 14:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.406 14:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.406 14:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.406 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:09.406 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:09.406 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:25:09.406 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:09.406 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:09.406 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:09.406 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:09.406 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjFjNDQwOGM3NjBkZjdiNjUxNzMzMDkyN2MzOWQ0ZGYQsnMf: 00:25:09.406 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzA2ZTQ1NzIwOWY5Zjg5YzJiMmY1Nzc1ZGY3ODgxNDYxMGE1Y2Y0MWViYWUxNDBiYWZkYzg1MmE4MWI0ZjI0N1S4BP8=: 00:25:09.406 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:09.406 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:09.406 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjFjNDQwOGM3NjBkZjdiNjUxNzMzMDkyN2MzOWQ0ZGYQsnMf: 00:25:09.406 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzA2ZTQ1NzIwOWY5Zjg5YzJiMmY1Nzc1ZGY3ODgxNDYxMGE1Y2Y0MWViYWUxNDBiYWZkYzg1MmE4MWI0ZjI0N1S4BP8=: ]] 00:25:09.406 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzA2ZTQ1NzIwOWY5Zjg5YzJiMmY1Nzc1ZGY3ODgxNDYxMGE1Y2Y0MWViYWUxNDBiYWZkYzg1MmE4MWI0ZjI0N1S4BP8=: 00:25:09.406 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:25:09.406 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:09.406 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:09.406 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:09.406 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:09.406 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:09.406 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:09.406 14:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.406 14:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.406 14:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.406 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:09.407 14:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:09.407 14:04:04 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:25:09.407 14:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:09.407 14:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:09.407 14:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:09.407 14:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:09.407 14:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:09.407 14:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:09.407 14:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:09.407 14:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:09.407 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:09.407 14:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.407 14:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.665 nvme0n1 00:25:09.665 14:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.665 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:09.665 14:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.665 14:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.665 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:09.922 14:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.922 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.922 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:09.922 14:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.922 14:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.922 14:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.922 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:09.922 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:25:09.922 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:09.922 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:09.922 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:09.922 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:09.922 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmFmZmIwOTBmZDE5NGE4OTA3OGQyZjRjZTdiMzk2Zjk5ZDVmNTBmNzcxNGRhOGYxLVa/9A==: 00:25:09.922 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmU2ODUxMTJhM2FmYWU1OGU0NWI3NDFhNzJkZjFlOThiMjM5NjQxMDViYWEwYjFlgH+lVw==: 00:25:09.922 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:09.922 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:09.922 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YmFmZmIwOTBmZDE5NGE4OTA3OGQyZjRjZTdiMzk2Zjk5ZDVmNTBmNzcxNGRhOGYxLVa/9A==: 00:25:09.923 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmU2ODUxMTJhM2FmYWU1OGU0NWI3NDFhNzJkZjFlOThiMjM5NjQxMDViYWEwYjFlgH+lVw==: ]] 00:25:09.923 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmU2ODUxMTJhM2FmYWU1OGU0NWI3NDFhNzJkZjFlOThiMjM5NjQxMDViYWEwYjFlgH+lVw==: 00:25:09.923 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:25:09.923 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:09.923 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:09.923 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:09.923 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:09.923 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:09.923 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:09.923 14:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.923 14:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.923 14:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.923 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:09.923 14:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:09.923 14:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:09.923 14:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:09.923 14:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:09.923 14:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:09.923 14:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:09.923 14:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:09.923 14:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:09.923 14:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:09.923 14:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:09.923 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:09.923 14:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.923 14:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.180 nvme0n1 00:25:10.180 14:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.180 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.180 14:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.180 14:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.180 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:10.180 14:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.180 14:04:04 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.180 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.180 14:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.180 14:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.180 14:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.180 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:10.180 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:25:10.180 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.180 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:10.180 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:10.180 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:10.180 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU3OWJhMGRkZGM5NGFhZTE3NGZlNWRlZGU4NmU0NjbI6Tzq: 00:25:10.180 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQzM2M3Y2Y4ODFmNTQwMjliNjQ3YjdhNjVkYzcwODk8DVEz: 00:25:10.180 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:10.180 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:10.180 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU3OWJhMGRkZGM5NGFhZTE3NGZlNWRlZGU4NmU0NjbI6Tzq: 00:25:10.180 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQzM2M3Y2Y4ODFmNTQwMjliNjQ3YjdhNjVkYzcwODk8DVEz: ]] 00:25:10.180 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQzM2M3Y2Y4ODFmNTQwMjliNjQ3YjdhNjVkYzcwODk8DVEz: 00:25:10.180 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:25:10.180 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:10.180 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:10.180 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:10.180 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:10.180 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:10.180 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:10.180 14:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.180 14:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.180 14:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.180 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:10.180 14:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:10.180 14:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:10.180 14:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:10.180 14:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.180 14:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.180 14:04:04 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:10.180 14:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.180 14:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:10.180 14:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:10.180 14:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:10.180 14:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:10.180 14:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.180 14:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.437 nvme0n1 00:25:10.437 14:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.437 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.437 14:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.437 14:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.437 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:10.437 14:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.695 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.695 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.695 14:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.695 14:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.695 14:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.695 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:10.695 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:25:10.695 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.695 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:10.695 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:10.695 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:10.695 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWFiNDU0ZjVhMTY2NmM2Yjg0MGE5NWY2YjM4ZTIyNDljNzI0YzEyZTcxOTM5YzUwxqdYUg==: 00:25:10.695 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThkMjdlNzM5NDc1ZmQ1MmI4NTdjYjc2NmNmOTkxYWaXS2a7: 00:25:10.695 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:10.695 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:10.695 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWFiNDU0ZjVhMTY2NmM2Yjg0MGE5NWY2YjM4ZTIyNDljNzI0YzEyZTcxOTM5YzUwxqdYUg==: 00:25:10.695 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThkMjdlNzM5NDc1ZmQ1MmI4NTdjYjc2NmNmOTkxYWaXS2a7: ]] 00:25:10.695 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThkMjdlNzM5NDc1ZmQ1MmI4NTdjYjc2NmNmOTkxYWaXS2a7: 00:25:10.695 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:25:10.695 14:04:05 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:10.695 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:10.695 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:10.695 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:10.695 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:10.695 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:10.695 14:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.695 14:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.695 14:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.695 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:10.695 14:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:10.695 14:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:10.695 14:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:10.695 14:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.695 14:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.695 14:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:10.695 14:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.695 14:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:10.695 14:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:10.695 14:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:10.695 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:10.695 14:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.695 14:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.952 nvme0n1 00:25:10.952 14:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.952 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.952 14:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.952 14:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.952 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:10.952 14:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.952 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.952 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.952 14:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.952 14:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.952 14:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.952 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:25:10.952 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:25:10.952 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.952 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:10.952 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:10.952 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:10.952 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTU0NDcwMDQxMDU1ZjBiMDNkMmY5MzM3YjJkNTIxMmQ3NDU1ZWU0ZDgxNmI0ZTVkMjY0NGQzOGNmNWJmYmVmY3hJPG4=: 00:25:10.952 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:10.952 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:10.952 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:10.952 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTU0NDcwMDQxMDU1ZjBiMDNkMmY5MzM3YjJkNTIxMmQ3NDU1ZWU0ZDgxNmI0ZTVkMjY0NGQzOGNmNWJmYmVmY3hJPG4=: 00:25:10.952 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:10.952 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:25:10.952 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:10.952 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:10.952 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:10.952 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:10.952 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:10.952 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:10.952 14:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.952 14:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.952 14:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.952 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:10.952 14:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:10.952 14:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:10.952 14:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:10.952 14:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.952 14:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.952 14:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:10.952 14:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.952 14:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:10.952 14:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:10.952 14:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:10.952 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:10.952 14:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:25:10.952 14:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.210 nvme0n1 00:25:11.210 14:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.210 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.210 14:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.210 14:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.210 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.210 14:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.210 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.210 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.210 14:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.210 14:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.210 14:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.210 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:11.210 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.210 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:25:11.210 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.210 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:11.210 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:11.210 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:11.210 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjFjNDQwOGM3NjBkZjdiNjUxNzMzMDkyN2MzOWQ0ZGYQsnMf: 00:25:11.210 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzA2ZTQ1NzIwOWY5Zjg5YzJiMmY1Nzc1ZGY3ODgxNDYxMGE1Y2Y0MWViYWUxNDBiYWZkYzg1MmE4MWI0ZjI0N1S4BP8=: 00:25:11.210 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:11.210 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:11.210 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjFjNDQwOGM3NjBkZjdiNjUxNzMzMDkyN2MzOWQ0ZGYQsnMf: 00:25:11.210 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzA2ZTQ1NzIwOWY5Zjg5YzJiMmY1Nzc1ZGY3ODgxNDYxMGE1Y2Y0MWViYWUxNDBiYWZkYzg1MmE4MWI0ZjI0N1S4BP8=: ]] 00:25:11.210 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzA2ZTQ1NzIwOWY5Zjg5YzJiMmY1Nzc1ZGY3ODgxNDYxMGE1Y2Y0MWViYWUxNDBiYWZkYzg1MmE4MWI0ZjI0N1S4BP8=: 00:25:11.210 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:25:11.210 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.210 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:11.210 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:11.210 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:11.210 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.210 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:25:11.210 14:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.210 14:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.210 14:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.210 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:11.210 14:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:11.210 14:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:11.210 14:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:11.210 14:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.210 14:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.210 14:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:11.210 14:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.210 14:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:11.210 14:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:11.210 14:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:11.210 14:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:11.210 14:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.210 14:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.774 nvme0n1 00:25:11.774 14:04:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.774 14:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.774 14:04:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.774 14:04:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.774 14:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.774 14:04:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.774 14:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.774 14:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.774 14:04:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.774 14:04:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.774 14:04:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.774 14:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.774 14:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:25:11.774 14:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.774 14:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:11.774 14:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:11.774 14:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:11.774 14:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YmFmZmIwOTBmZDE5NGE4OTA3OGQyZjRjZTdiMzk2Zjk5ZDVmNTBmNzcxNGRhOGYxLVa/9A==: 00:25:11.774 14:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmU2ODUxMTJhM2FmYWU1OGU0NWI3NDFhNzJkZjFlOThiMjM5NjQxMDViYWEwYjFlgH+lVw==: 00:25:11.774 14:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:11.774 14:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:11.774 14:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmFmZmIwOTBmZDE5NGE4OTA3OGQyZjRjZTdiMzk2Zjk5ZDVmNTBmNzcxNGRhOGYxLVa/9A==: 00:25:11.774 14:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmU2ODUxMTJhM2FmYWU1OGU0NWI3NDFhNzJkZjFlOThiMjM5NjQxMDViYWEwYjFlgH+lVw==: ]] 00:25:11.774 14:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmU2ODUxMTJhM2FmYWU1OGU0NWI3NDFhNzJkZjFlOThiMjM5NjQxMDViYWEwYjFlgH+lVw==: 00:25:11.774 14:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:25:11.774 14:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.774 14:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:11.774 14:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:11.774 14:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:11.774 14:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.774 14:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:11.774 14:04:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.774 14:04:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.774 14:04:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.774 14:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:11.774 14:04:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:11.774 14:04:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:11.774 14:04:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:11.774 14:04:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.774 14:04:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.774 14:04:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:11.774 14:04:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.774 14:04:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:11.774 14:04:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:11.774 14:04:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:11.774 14:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:11.774 14:04:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.774 14:04:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.338 nvme0n1 00:25:12.338 14:04:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.595 14:04:07 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.595 14:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:12.595 14:04:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.595 14:04:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.595 14:04:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.595 14:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.595 14:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.595 14:04:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.595 14:04:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.595 14:04:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.595 14:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:12.595 14:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:25:12.595 14:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:12.595 14:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:12.595 14:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:12.595 14:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:12.595 14:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU3OWJhMGRkZGM5NGFhZTE3NGZlNWRlZGU4NmU0NjbI6Tzq: 00:25:12.595 14:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQzM2M3Y2Y4ODFmNTQwMjliNjQ3YjdhNjVkYzcwODk8DVEz: 00:25:12.595 14:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:12.595 14:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:12.595 14:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU3OWJhMGRkZGM5NGFhZTE3NGZlNWRlZGU4NmU0NjbI6Tzq: 00:25:12.595 14:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQzM2M3Y2Y4ODFmNTQwMjliNjQ3YjdhNjVkYzcwODk8DVEz: ]] 00:25:12.595 14:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQzM2M3Y2Y4ODFmNTQwMjliNjQ3YjdhNjVkYzcwODk8DVEz: 00:25:12.595 14:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:25:12.595 14:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:12.595 14:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:12.595 14:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:12.595 14:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:12.595 14:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:12.595 14:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:12.595 14:04:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.595 14:04:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.595 14:04:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.595 14:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:12.595 14:04:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:25:12.595 14:04:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:12.595 14:04:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:12.595 14:04:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.595 14:04:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:12.595 14:04:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:12.595 14:04:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:12.595 14:04:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:12.595 14:04:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:12.595 14:04:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:12.595 14:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:12.595 14:04:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.595 14:04:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.161 nvme0n1 00:25:13.161 14:04:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.161 14:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.161 14:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.161 14:04:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.161 14:04:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.161 14:04:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.161 14:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.161 14:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.161 14:04:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.161 14:04:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.161 14:04:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.161 14:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.161 14:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:25:13.161 14:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.161 14:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:13.161 14:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:13.161 14:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:13.161 14:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWFiNDU0ZjVhMTY2NmM2Yjg0MGE5NWY2YjM4ZTIyNDljNzI0YzEyZTcxOTM5YzUwxqdYUg==: 00:25:13.161 14:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThkMjdlNzM5NDc1ZmQ1MmI4NTdjYjc2NmNmOTkxYWaXS2a7: 00:25:13.161 14:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:13.161 14:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:13.161 14:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:YWFiNDU0ZjVhMTY2NmM2Yjg0MGE5NWY2YjM4ZTIyNDljNzI0YzEyZTcxOTM5YzUwxqdYUg==: 00:25:13.161 14:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThkMjdlNzM5NDc1ZmQ1MmI4NTdjYjc2NmNmOTkxYWaXS2a7: ]] 00:25:13.161 14:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThkMjdlNzM5NDc1ZmQ1MmI4NTdjYjc2NmNmOTkxYWaXS2a7: 00:25:13.161 14:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:25:13.161 14:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.161 14:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:13.161 14:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:13.161 14:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:13.161 14:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.161 14:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:13.161 14:04:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.161 14:04:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.161 14:04:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.161 14:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.161 14:04:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:13.161 14:04:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:13.161 14:04:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:13.161 14:04:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.161 14:04:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.161 14:04:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:13.161 14:04:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.161 14:04:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:13.161 14:04:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:13.161 14:04:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:13.161 14:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:13.161 14:04:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.161 14:04:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.726 nvme0n1 00:25:13.726 14:04:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.726 14:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.726 14:04:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.726 14:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.726 14:04:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.726 14:04:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.726 14:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:25:13.726 14:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.726 14:04:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.726 14:04:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.726 14:04:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.726 14:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.726 14:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:25:13.726 14:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.726 14:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:13.726 14:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:13.726 14:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:13.726 14:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTU0NDcwMDQxMDU1ZjBiMDNkMmY5MzM3YjJkNTIxMmQ3NDU1ZWU0ZDgxNmI0ZTVkMjY0NGQzOGNmNWJmYmVmY3hJPG4=: 00:25:13.726 14:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:13.726 14:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:13.726 14:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:13.726 14:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTU0NDcwMDQxMDU1ZjBiMDNkMmY5MzM3YjJkNTIxMmQ3NDU1ZWU0ZDgxNmI0ZTVkMjY0NGQzOGNmNWJmYmVmY3hJPG4=: 00:25:13.726 14:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:13.726 14:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:25:13.726 14:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.726 14:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:13.726 14:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:13.726 14:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:13.726 14:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.726 14:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:13.726 14:04:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.984 14:04:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.984 14:04:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.984 14:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.984 14:04:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:13.984 14:04:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:13.984 14:04:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:13.984 14:04:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.984 14:04:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.984 14:04:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:13.984 14:04:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.984 14:04:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:25:13.984 14:04:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:13.984 14:04:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:13.984 14:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:13.984 14:04:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.984 14:04:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.550 nvme0n1 00:25:14.550 14:04:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.550 14:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.550 14:04:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.550 14:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.550 14:04:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.550 14:04:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.550 14:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.550 14:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.550 14:04:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.550 14:04:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.550 14:04:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.550 14:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:14.550 14:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.550 14:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:25:14.550 14:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.550 14:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:14.550 14:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:14.550 14:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:14.550 14:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjFjNDQwOGM3NjBkZjdiNjUxNzMzMDkyN2MzOWQ0ZGYQsnMf: 00:25:14.550 14:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzA2ZTQ1NzIwOWY5Zjg5YzJiMmY1Nzc1ZGY3ODgxNDYxMGE1Y2Y0MWViYWUxNDBiYWZkYzg1MmE4MWI0ZjI0N1S4BP8=: 00:25:14.550 14:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:14.550 14:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:14.550 14:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjFjNDQwOGM3NjBkZjdiNjUxNzMzMDkyN2MzOWQ0ZGYQsnMf: 00:25:14.550 14:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzA2ZTQ1NzIwOWY5Zjg5YzJiMmY1Nzc1ZGY3ODgxNDYxMGE1Y2Y0MWViYWUxNDBiYWZkYzg1MmE4MWI0ZjI0N1S4BP8=: ]] 00:25:14.550 14:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzA2ZTQ1NzIwOWY5Zjg5YzJiMmY1Nzc1ZGY3ODgxNDYxMGE1Y2Y0MWViYWUxNDBiYWZkYzg1MmE4MWI0ZjI0N1S4BP8=: 00:25:14.550 14:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:25:14.550 14:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
00:25:14.550 14:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:14.550 14:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:14.550 14:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:14.551 14:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.551 14:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:14.551 14:04:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.551 14:04:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.551 14:04:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.551 14:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.551 14:04:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:14.551 14:04:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:14.551 14:04:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:14.551 14:04:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.551 14:04:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.551 14:04:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:14.551 14:04:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.551 14:04:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:14.551 14:04:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:14.551 14:04:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:14.551 14:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:14.551 14:04:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.551 14:04:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.484 nvme0n1 00:25:15.484 14:04:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.484 14:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.484 14:04:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.484 14:04:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.484 14:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.484 14:04:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.484 14:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.484 14:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.484 14:04:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.484 14:04:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.484 14:04:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.484 14:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.484 14:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:25:15.484 14:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.484 14:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:15.484 14:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:15.484 14:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:15.484 14:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmFmZmIwOTBmZDE5NGE4OTA3OGQyZjRjZTdiMzk2Zjk5ZDVmNTBmNzcxNGRhOGYxLVa/9A==: 00:25:15.484 14:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmU2ODUxMTJhM2FmYWU1OGU0NWI3NDFhNzJkZjFlOThiMjM5NjQxMDViYWEwYjFlgH+lVw==: 00:25:15.484 14:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:15.484 14:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:15.484 14:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmFmZmIwOTBmZDE5NGE4OTA3OGQyZjRjZTdiMzk2Zjk5ZDVmNTBmNzcxNGRhOGYxLVa/9A==: 00:25:15.484 14:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmU2ODUxMTJhM2FmYWU1OGU0NWI3NDFhNzJkZjFlOThiMjM5NjQxMDViYWEwYjFlgH+lVw==: ]] 00:25:15.484 14:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmU2ODUxMTJhM2FmYWU1OGU0NWI3NDFhNzJkZjFlOThiMjM5NjQxMDViYWEwYjFlgH+lVw==: 00:25:15.484 14:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:25:15.484 14:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.484 14:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:15.484 14:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:15.484 14:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:15.484 14:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.484 14:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:15.484 14:04:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.484 14:04:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.484 14:04:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.484 14:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.484 14:04:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:15.484 14:04:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:15.484 14:04:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:15.484 14:04:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.484 14:04:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.484 14:04:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:15.484 14:04:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.484 14:04:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:15.484 14:04:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:15.484 14:04:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:15.484 14:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:15.484 14:04:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.484 14:04:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.416 nvme0n1 00:25:16.416 14:04:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.416 14:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.416 14:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.416 14:04:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.416 14:04:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.416 14:04:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.416 14:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.417 14:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.417 14:04:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.417 14:04:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.417 14:04:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.417 14:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.417 14:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:16.417 14:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.417 14:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:16.417 14:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:16.417 14:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:16.417 14:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU3OWJhMGRkZGM5NGFhZTE3NGZlNWRlZGU4NmU0NjbI6Tzq: 00:25:16.417 14:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQzM2M3Y2Y4ODFmNTQwMjliNjQ3YjdhNjVkYzcwODk8DVEz: 00:25:16.417 14:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:16.417 14:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:16.417 14:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU3OWJhMGRkZGM5NGFhZTE3NGZlNWRlZGU4NmU0NjbI6Tzq: 00:25:16.417 14:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQzM2M3Y2Y4ODFmNTQwMjliNjQ3YjdhNjVkYzcwODk8DVEz: ]] 00:25:16.417 14:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQzM2M3Y2Y4ODFmNTQwMjliNjQ3YjdhNjVkYzcwODk8DVEz: 00:25:16.417 14:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:25:16.417 14:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.417 14:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:16.417 14:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:16.417 14:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:16.417 14:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.417 14:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:25:16.417 14:04:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.417 14:04:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.417 14:04:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.417 14:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.417 14:04:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:16.417 14:04:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:16.417 14:04:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:16.417 14:04:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.417 14:04:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.417 14:04:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:16.417 14:04:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.417 14:04:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:16.417 14:04:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:16.417 14:04:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:16.417 14:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:16.417 14:04:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.417 14:04:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.352 nvme0n1 00:25:17.352 14:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.352 14:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.352 14:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.352 14:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.352 14:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:17.352 14:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.352 14:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.352 14:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:17.352 14:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.352 14:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.352 14:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.352 14:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:17.352 14:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:17.352 14:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.352 14:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:17.352 14:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:17.352 14:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:17.352 14:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YWFiNDU0ZjVhMTY2NmM2Yjg0MGE5NWY2YjM4ZTIyNDljNzI0YzEyZTcxOTM5YzUwxqdYUg==: 00:25:17.352 14:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThkMjdlNzM5NDc1ZmQ1MmI4NTdjYjc2NmNmOTkxYWaXS2a7: 00:25:17.352 14:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:17.352 14:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:17.352 14:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWFiNDU0ZjVhMTY2NmM2Yjg0MGE5NWY2YjM4ZTIyNDljNzI0YzEyZTcxOTM5YzUwxqdYUg==: 00:25:17.352 14:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThkMjdlNzM5NDc1ZmQ1MmI4NTdjYjc2NmNmOTkxYWaXS2a7: ]] 00:25:17.352 14:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThkMjdlNzM5NDc1ZmQ1MmI4NTdjYjc2NmNmOTkxYWaXS2a7: 00:25:17.352 14:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:25:17.352 14:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:17.352 14:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:17.352 14:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:17.352 14:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:17.352 14:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:17.352 14:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:17.352 14:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.352 14:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.352 14:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.352 14:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:17.352 14:04:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:17.352 14:04:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:17.352 14:04:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:17.352 14:04:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.352 14:04:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.352 14:04:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:17.352 14:04:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.352 14:04:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:17.352 14:04:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:17.352 14:04:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:17.352 14:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:17.352 14:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.352 14:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.339 nvme0n1 00:25:18.339 14:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.339 14:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:25:18.339 14:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:18.339 14:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.339 14:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.339 14:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.339 14:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.339 14:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.339 14:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.339 14:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.339 14:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.339 14:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:18.339 14:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:25:18.339 14:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.339 14:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:18.339 14:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:18.339 14:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:18.339 14:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTU0NDcwMDQxMDU1ZjBiMDNkMmY5MzM3YjJkNTIxMmQ3NDU1ZWU0ZDgxNmI0ZTVkMjY0NGQzOGNmNWJmYmVmY3hJPG4=: 00:25:18.339 14:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:18.339 14:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:18.339 14:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:18.339 14:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTU0NDcwMDQxMDU1ZjBiMDNkMmY5MzM3YjJkNTIxMmQ3NDU1ZWU0ZDgxNmI0ZTVkMjY0NGQzOGNmNWJmYmVmY3hJPG4=: 00:25:18.339 14:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:18.339 14:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:25:18.339 14:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:18.339 14:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:18.339 14:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:18.339 14:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:18.339 14:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.339 14:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:18.339 14:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.339 14:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.339 14:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.339 14:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:18.339 14:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:18.339 14:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:18.339 14:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:18.339 14:04:13 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.339 14:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.339 14:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:18.339 14:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.339 14:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:18.339 14:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:18.339 14:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:18.339 14:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:18.339 14:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.339 14:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.298 nvme0n1 00:25:19.298 14:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.298 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.298 14:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.298 14:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.298 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.298 14:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.298 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.298 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.298 14:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.298 14:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.556 14:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.556 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:19.556 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:19.556 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjFjNDQwOGM3NjBkZjdiNjUxNzMzMDkyN2MzOWQ0ZGYQsnMf: 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzA2ZTQ1NzIwOWY5Zjg5YzJiMmY1Nzc1ZGY3ODgxNDYxMGE1Y2Y0MWViYWUxNDBiYWZkYzg1MmE4MWI0ZjI0N1S4BP8=: 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZjFjNDQwOGM3NjBkZjdiNjUxNzMzMDkyN2MzOWQ0ZGYQsnMf: 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzA2ZTQ1NzIwOWY5Zjg5YzJiMmY1Nzc1ZGY3ODgxNDYxMGE1Y2Y0MWViYWUxNDBiYWZkYzg1MmE4MWI0ZjI0N1S4BP8=: ]] 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzA2ZTQ1NzIwOWY5Zjg5YzJiMmY1Nzc1ZGY3ODgxNDYxMGE1Y2Y0MWViYWUxNDBiYWZkYzg1MmE4MWI0ZjI0N1S4BP8=: 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.557 nvme0n1 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.557 14:04:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmFmZmIwOTBmZDE5NGE4OTA3OGQyZjRjZTdiMzk2Zjk5ZDVmNTBmNzcxNGRhOGYxLVa/9A==: 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmU2ODUxMTJhM2FmYWU1OGU0NWI3NDFhNzJkZjFlOThiMjM5NjQxMDViYWEwYjFlgH+lVw==: 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmFmZmIwOTBmZDE5NGE4OTA3OGQyZjRjZTdiMzk2Zjk5ZDVmNTBmNzcxNGRhOGYxLVa/9A==: 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmU2ODUxMTJhM2FmYWU1OGU0NWI3NDFhNzJkZjFlOThiMjM5NjQxMDViYWEwYjFlgH+lVw==: ]] 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmU2ODUxMTJhM2FmYWU1OGU0NWI3NDFhNzJkZjFlOThiMjM5NjQxMDViYWEwYjFlgH+lVw==: 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.557 14:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.816 nvme0n1 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU3OWJhMGRkZGM5NGFhZTE3NGZlNWRlZGU4NmU0NjbI6Tzq: 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQzM2M3Y2Y4ODFmNTQwMjliNjQ3YjdhNjVkYzcwODk8DVEz: 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU3OWJhMGRkZGM5NGFhZTE3NGZlNWRlZGU4NmU0NjbI6Tzq: 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQzM2M3Y2Y4ODFmNTQwMjliNjQ3YjdhNjVkYzcwODk8DVEz: ]] 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQzM2M3Y2Y4ODFmNTQwMjliNjQ3YjdhNjVkYzcwODk8DVEz: 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.816 14:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.074 nvme0n1 00:25:20.074 14:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.074 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.074 14:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.074 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.074 14:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.074 14:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.074 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.074 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.074 14:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.074 14:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.074 14:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.074 14:04:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.074 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:25:20.074 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.074 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:20.074 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:20.074 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:20.074 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWFiNDU0ZjVhMTY2NmM2Yjg0MGE5NWY2YjM4ZTIyNDljNzI0YzEyZTcxOTM5YzUwxqdYUg==: 00:25:20.074 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThkMjdlNzM5NDc1ZmQ1MmI4NTdjYjc2NmNmOTkxYWaXS2a7: 00:25:20.074 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:20.074 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:20.074 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWFiNDU0ZjVhMTY2NmM2Yjg0MGE5NWY2YjM4ZTIyNDljNzI0YzEyZTcxOTM5YzUwxqdYUg==: 00:25:20.074 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThkMjdlNzM5NDc1ZmQ1MmI4NTdjYjc2NmNmOTkxYWaXS2a7: ]] 00:25:20.074 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThkMjdlNzM5NDc1ZmQ1MmI4NTdjYjc2NmNmOTkxYWaXS2a7: 00:25:20.074 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:25:20.074 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.074 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:20.074 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:20.074 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:20.074 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.074 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:20.074 14:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.074 14:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.074 14:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.074 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.074 14:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:20.075 14:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:20.075 14:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:20.075 14:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.075 14:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.075 14:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:20.075 14:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.075 14:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:20.075 14:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:20.075 14:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:20.075 14:04:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:20.075 14:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.075 14:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.332 nvme0n1 00:25:20.332 14:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.332 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.332 14:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.332 14:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.332 14:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.332 14:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.332 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.332 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.332 14:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.332 14:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.332 14:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.332 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.332 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:25:20.332 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.332 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:20.332 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:20.332 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:20.332 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTU0NDcwMDQxMDU1ZjBiMDNkMmY5MzM3YjJkNTIxMmQ3NDU1ZWU0ZDgxNmI0ZTVkMjY0NGQzOGNmNWJmYmVmY3hJPG4=: 00:25:20.332 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:20.332 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:20.332 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:20.332 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTU0NDcwMDQxMDU1ZjBiMDNkMmY5MzM3YjJkNTIxMmQ3NDU1ZWU0ZDgxNmI0ZTVkMjY0NGQzOGNmNWJmYmVmY3hJPG4=: 00:25:20.332 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:20.332 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:25:20.332 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.332 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:20.332 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:20.332 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:20.332 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.332 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:20.332 14:04:15 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.332 14:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.332 14:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.332 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.332 14:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:20.332 14:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:20.332 14:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:20.332 14:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.332 14:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.332 14:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:20.332 14:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.332 14:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:20.333 14:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:20.333 14:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:20.333 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:20.333 14:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.333 14:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.590 nvme0n1 00:25:20.590 14:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.590 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.590 14:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.590 14:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.590 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.590 14:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.590 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.590 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.590 14:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.590 14:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.590 14:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.590 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:20.590 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.590 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:25:20.590 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.590 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:20.590 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:20.590 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:20.590 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZjFjNDQwOGM3NjBkZjdiNjUxNzMzMDkyN2MzOWQ0ZGYQsnMf: 00:25:20.590 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzA2ZTQ1NzIwOWY5Zjg5YzJiMmY1Nzc1ZGY3ODgxNDYxMGE1Y2Y0MWViYWUxNDBiYWZkYzg1MmE4MWI0ZjI0N1S4BP8=: 00:25:20.590 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:20.590 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:20.590 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjFjNDQwOGM3NjBkZjdiNjUxNzMzMDkyN2MzOWQ0ZGYQsnMf: 00:25:20.590 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzA2ZTQ1NzIwOWY5Zjg5YzJiMmY1Nzc1ZGY3ODgxNDYxMGE1Y2Y0MWViYWUxNDBiYWZkYzg1MmE4MWI0ZjI0N1S4BP8=: ]] 00:25:20.590 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzA2ZTQ1NzIwOWY5Zjg5YzJiMmY1Nzc1ZGY3ODgxNDYxMGE1Y2Y0MWViYWUxNDBiYWZkYzg1MmE4MWI0ZjI0N1S4BP8=: 00:25:20.590 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:25:20.590 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.590 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:20.590 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:20.590 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:20.590 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.590 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:20.590 14:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.590 14:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.590 14:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.590 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.590 14:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:20.590 14:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:20.590 14:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:20.590 14:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.590 14:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.590 14:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:20.590 14:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.590 14:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:20.590 14:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:20.590 14:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:20.590 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:20.590 14:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.590 14:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.848 nvme0n1 00:25:20.848 14:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.848 
14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.848 14:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.848 14:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.848 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.848 14:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.848 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.848 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.848 14:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.848 14:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.848 14:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.848 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.848 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:25:20.848 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.848 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:20.848 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:20.848 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:20.848 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmFmZmIwOTBmZDE5NGE4OTA3OGQyZjRjZTdiMzk2Zjk5ZDVmNTBmNzcxNGRhOGYxLVa/9A==: 00:25:20.848 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmU2ODUxMTJhM2FmYWU1OGU0NWI3NDFhNzJkZjFlOThiMjM5NjQxMDViYWEwYjFlgH+lVw==: 00:25:20.848 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:20.848 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:20.848 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmFmZmIwOTBmZDE5NGE4OTA3OGQyZjRjZTdiMzk2Zjk5ZDVmNTBmNzcxNGRhOGYxLVa/9A==: 00:25:20.848 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmU2ODUxMTJhM2FmYWU1OGU0NWI3NDFhNzJkZjFlOThiMjM5NjQxMDViYWEwYjFlgH+lVw==: ]] 00:25:20.848 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmU2ODUxMTJhM2FmYWU1OGU0NWI3NDFhNzJkZjFlOThiMjM5NjQxMDViYWEwYjFlgH+lVw==: 00:25:20.848 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:25:20.848 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.848 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:20.848 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:20.848 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:20.848 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.848 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:20.848 14:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.848 14:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.848 14:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.848 14:04:15 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.848 14:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:20.848 14:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:20.848 14:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:20.848 14:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.848 14:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.848 14:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:20.848 14:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.848 14:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:20.848 14:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:20.848 14:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:20.848 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:20.848 14:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.848 14:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.105 nvme0n1 00:25:21.105 14:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.105 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.105 14:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.105 14:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.105 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.105 14:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.105 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.105 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.105 14:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.105 14:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.105 14:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.105 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.105 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:25:21.105 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.105 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:21.105 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:21.105 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:21.105 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU3OWJhMGRkZGM5NGFhZTE3NGZlNWRlZGU4NmU0NjbI6Tzq: 00:25:21.105 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQzM2M3Y2Y4ODFmNTQwMjliNjQ3YjdhNjVkYzcwODk8DVEz: 00:25:21.105 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:21.105 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
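The host/auth.sh@101-104 frames repeating through this trace are the driver loop for this phase of the test: for each DH group, every key index is first programmed on the target with nvmet_auth_set_key and then exercised with connect_authenticate. A minimal sketch of that loop, reconstructed from the xtrace output (the contents of the dhgroups and keys arrays beyond what the log shows are assumptions):

# Reconstruction of the loop visible at host/auth.sh@101-104, not the shipped script.
# Assumed: keys[]/ckeys[] hold the DHHC-1 secrets echoed in the trace above.
dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096")      # groups seen in this part of the log
for dhgroup in "${dhgroups[@]}"; do                 # host/auth.sh@101
    for keyid in "${!keys[@]}"; do                  # host/auth.sh@102
        nvmet_auth_set_key sha512 "$dhgroup" "$keyid"      # host/auth.sh@103: program hmac(sha512)/group/key on the target
        connect_authenticate sha512 "$dhgroup" "$keyid"    # host/auth.sh@104: attach, verify, detach via bdev_nvme RPCs
    done
done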
00:25:21.105 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU3OWJhMGRkZGM5NGFhZTE3NGZlNWRlZGU4NmU0NjbI6Tzq: 00:25:21.105 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQzM2M3Y2Y4ODFmNTQwMjliNjQ3YjdhNjVkYzcwODk8DVEz: ]] 00:25:21.105 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQzM2M3Y2Y4ODFmNTQwMjliNjQ3YjdhNjVkYzcwODk8DVEz: 00:25:21.105 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:25:21.105 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.105 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:21.105 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:21.105 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:21.105 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.105 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:21.105 14:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.105 14:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.105 14:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.105 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.105 14:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:21.105 14:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:21.105 14:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:21.105 14:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.105 14:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.105 14:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:21.105 14:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.105 14:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:21.105 14:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:21.105 14:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:21.105 14:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:21.105 14:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.106 14:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.363 nvme0n1 00:25:21.363 14:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.363 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.363 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.363 14:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.363 14:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.363 14:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.363 14:04:16 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.363 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.363 14:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.363 14:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.363 14:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.363 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.363 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:25:21.363 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.363 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:21.363 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:21.363 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:21.364 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWFiNDU0ZjVhMTY2NmM2Yjg0MGE5NWY2YjM4ZTIyNDljNzI0YzEyZTcxOTM5YzUwxqdYUg==: 00:25:21.364 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThkMjdlNzM5NDc1ZmQ1MmI4NTdjYjc2NmNmOTkxYWaXS2a7: 00:25:21.364 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:21.364 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:21.364 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWFiNDU0ZjVhMTY2NmM2Yjg0MGE5NWY2YjM4ZTIyNDljNzI0YzEyZTcxOTM5YzUwxqdYUg==: 00:25:21.364 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThkMjdlNzM5NDc1ZmQ1MmI4NTdjYjc2NmNmOTkxYWaXS2a7: ]] 00:25:21.364 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThkMjdlNzM5NDc1ZmQ1MmI4NTdjYjc2NmNmOTkxYWaXS2a7: 00:25:21.364 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:25:21.364 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.364 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:21.364 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:21.364 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:21.364 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.364 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:21.364 14:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.364 14:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.364 14:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.364 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.364 14:04:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:21.364 14:04:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:21.364 14:04:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:21.364 14:04:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.364 14:04:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
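The nvmf/common.sh@741-755 frames above are get_main_ns_ip resolving which address the initiator should connect to. A rough reconstruction from the trace follows; the TEST_TRANSPORT variable name and the indirect ${!ip} expansion are assumptions, while NVMF_INITIATOR_IP / NVMF_FIRST_TARGET_IP and the 10.0.0.1 result are taken from the log:

# Sketch of get_main_ns_ip as it appears in the nvmf/common.sh@741-755 frames (reconstruction).
get_main_ns_ip() {
    local ip                                          # nvmf/common.sh@741
    local -A ip_candidates=()                         # nvmf/common.sh@742
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP        # nvmf/common.sh@744
    ip_candidates["tcp"]=NVMF_INITIATOR_IP            # nvmf/common.sh@745
    [[ -z $TEST_TRANSPORT ]] && return 1              # nvmf/common.sh@747 ("[[ -z tcp ]]" above)
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}              # nvmf/common.sh@748: ip holds the variable *name*
    [[ -z ${!ip} ]] && return 1                       # nvmf/common.sh@750: indirect expansion; 10.0.0.1 in this run
    echo "${!ip}"                                     # nvmf/common.sh@755
}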
00:25:21.364 14:04:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:21.364 14:04:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.364 14:04:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:21.364 14:04:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:21.364 14:04:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:21.364 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:21.364 14:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.364 14:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.621 nvme0n1 00:25:21.621 14:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.621 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.621 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.621 14:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.621 14:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.621 14:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.622 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.622 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.622 14:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.622 14:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.622 14:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.622 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.622 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:25:21.622 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.622 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:21.622 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:21.622 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:21.622 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTU0NDcwMDQxMDU1ZjBiMDNkMmY5MzM3YjJkNTIxMmQ3NDU1ZWU0ZDgxNmI0ZTVkMjY0NGQzOGNmNWJmYmVmY3hJPG4=: 00:25:21.622 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:21.622 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:21.622 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:21.622 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTU0NDcwMDQxMDU1ZjBiMDNkMmY5MzM3YjJkNTIxMmQ3NDU1ZWU0ZDgxNmI0ZTVkMjY0NGQzOGNmNWJmYmVmY3hJPG4=: 00:25:21.622 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:21.622 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:25:21.622 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.622 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:21.622 
14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:21.622 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:21.622 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.622 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:21.622 14:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.622 14:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.622 14:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.622 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.622 14:04:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:21.622 14:04:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:21.622 14:04:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:21.622 14:04:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.622 14:04:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.622 14:04:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:21.622 14:04:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.622 14:04:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:21.622 14:04:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:21.622 14:04:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:21.622 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:21.622 14:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.622 14:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.880 nvme0n1 00:25:21.880 14:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.880 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.880 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.880 14:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.880 14:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.880 14:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.880 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.880 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.880 14:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.880 14:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.880 14:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.880 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:21.880 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.880 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:25:21.880 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.880 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:21.880 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:21.880 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:21.880 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjFjNDQwOGM3NjBkZjdiNjUxNzMzMDkyN2MzOWQ0ZGYQsnMf: 00:25:21.880 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzA2ZTQ1NzIwOWY5Zjg5YzJiMmY1Nzc1ZGY3ODgxNDYxMGE1Y2Y0MWViYWUxNDBiYWZkYzg1MmE4MWI0ZjI0N1S4BP8=: 00:25:21.880 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:21.880 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:21.880 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjFjNDQwOGM3NjBkZjdiNjUxNzMzMDkyN2MzOWQ0ZGYQsnMf: 00:25:21.880 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzA2ZTQ1NzIwOWY5Zjg5YzJiMmY1Nzc1ZGY3ODgxNDYxMGE1Y2Y0MWViYWUxNDBiYWZkYzg1MmE4MWI0ZjI0N1S4BP8=: ]] 00:25:21.880 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzA2ZTQ1NzIwOWY5Zjg5YzJiMmY1Nzc1ZGY3ODgxNDYxMGE1Y2Y0MWViYWUxNDBiYWZkYzg1MmE4MWI0ZjI0N1S4BP8=: 00:25:21.880 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:25:21.880 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.880 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:21.880 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:21.880 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:21.880 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.880 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:21.880 14:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.880 14:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.880 14:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.880 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.880 14:04:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:21.880 14:04:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:21.880 14:04:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:21.880 14:04:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.880 14:04:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.880 14:04:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:21.880 14:04:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.880 14:04:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:21.880 14:04:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:21.880 14:04:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:21.880 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:21.880 14:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.880 14:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.446 nvme0n1 00:25:22.446 14:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.446 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.446 14:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.446 14:04:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.446 14:04:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.446 14:04:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.446 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.446 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.446 14:04:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.446 14:04:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.446 14:04:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.446 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.446 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:25:22.446 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.446 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:22.446 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:22.446 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:22.446 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmFmZmIwOTBmZDE5NGE4OTA3OGQyZjRjZTdiMzk2Zjk5ZDVmNTBmNzcxNGRhOGYxLVa/9A==: 00:25:22.446 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmU2ODUxMTJhM2FmYWU1OGU0NWI3NDFhNzJkZjFlOThiMjM5NjQxMDViYWEwYjFlgH+lVw==: 00:25:22.446 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:22.446 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:22.446 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmFmZmIwOTBmZDE5NGE4OTA3OGQyZjRjZTdiMzk2Zjk5ZDVmNTBmNzcxNGRhOGYxLVa/9A==: 00:25:22.446 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmU2ODUxMTJhM2FmYWU1OGU0NWI3NDFhNzJkZjFlOThiMjM5NjQxMDViYWEwYjFlgH+lVw==: ]] 00:25:22.446 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmU2ODUxMTJhM2FmYWU1OGU0NWI3NDFhNzJkZjFlOThiMjM5NjQxMDViYWEwYjFlgH+lVw==: 00:25:22.446 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:25:22.446 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.446 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:22.446 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:22.446 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:22.446 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.446 14:04:17 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:22.446 14:04:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.446 14:04:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.446 14:04:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.446 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.446 14:04:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:22.446 14:04:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:22.446 14:04:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:22.446 14:04:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.446 14:04:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.446 14:04:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:22.446 14:04:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.446 14:04:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:22.446 14:04:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:22.446 14:04:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:22.446 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:22.446 14:04:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.446 14:04:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.704 nvme0n1 00:25:22.704 14:04:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.704 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.704 14:04:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.704 14:04:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.704 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.704 14:04:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.704 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.704 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.704 14:04:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.704 14:04:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.704 14:04:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.704 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.704 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:25:22.704 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.704 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:22.704 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:22.704 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
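The host/auth.sh@55-65 frames show connect_authenticate: restrict the bdev_nvme module to the digest/dhgroup under test, attach to the target with the matching DH-HMAC-CHAP key (plus a controller key when one exists for that index), confirm the controller registered as nvme0, then detach. A sketch reconstructed from the trace (the $hostnqn and $subnqn names are assumptions; their values in this run are nqn.2024-02.io.spdk:host0 and nqn.2024-02.io.spdk:cnode0, and the transport is tcp):

# Reconstruction of connect_authenticate from the host/auth.sh@55-65 frames above.
connect_authenticate() {
    local digest dhgroup keyid ckey                  # host/auth.sh@55
    digest=$1 dhgroup=$2 keyid=$3                    # host/auth.sh@57
    # Pass a controller key only when ckeyN was generated for this key index (host/auth.sh@58).
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    # Limit the initiator to the digest/dhgroup under test (host/auth.sh@60).
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # Attach with DH-HMAC-CHAP, verify the controller came up, then detach (host/auth.sh@61-65).
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key "key${keyid}" "${ckey[@]}"
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}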
00:25:22.704 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU3OWJhMGRkZGM5NGFhZTE3NGZlNWRlZGU4NmU0NjbI6Tzq: 00:25:22.704 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQzM2M3Y2Y4ODFmNTQwMjliNjQ3YjdhNjVkYzcwODk8DVEz: 00:25:22.704 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:22.704 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:22.704 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU3OWJhMGRkZGM5NGFhZTE3NGZlNWRlZGU4NmU0NjbI6Tzq: 00:25:22.704 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQzM2M3Y2Y4ODFmNTQwMjliNjQ3YjdhNjVkYzcwODk8DVEz: ]] 00:25:22.704 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQzM2M3Y2Y4ODFmNTQwMjliNjQ3YjdhNjVkYzcwODk8DVEz: 00:25:22.704 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:25:22.704 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.704 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:22.704 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:22.704 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:22.704 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.704 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:22.704 14:04:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.704 14:04:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.704 14:04:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.704 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.704 14:04:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:22.704 14:04:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:22.704 14:04:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:22.704 14:04:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.704 14:04:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.704 14:04:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:22.705 14:04:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.705 14:04:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:22.705 14:04:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:22.705 14:04:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:22.705 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:22.705 14:04:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.705 14:04:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.962 nvme0n1 00:25:22.962 14:04:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.962 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:25:22.962 14:04:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.962 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.962 14:04:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.219 14:04:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.219 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.219 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.219 14:04:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.219 14:04:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.219 14:04:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.219 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.219 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:25:23.219 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.219 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:23.219 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:23.219 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:23.220 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWFiNDU0ZjVhMTY2NmM2Yjg0MGE5NWY2YjM4ZTIyNDljNzI0YzEyZTcxOTM5YzUwxqdYUg==: 00:25:23.220 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThkMjdlNzM5NDc1ZmQ1MmI4NTdjYjc2NmNmOTkxYWaXS2a7: 00:25:23.220 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:23.220 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:23.220 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWFiNDU0ZjVhMTY2NmM2Yjg0MGE5NWY2YjM4ZTIyNDljNzI0YzEyZTcxOTM5YzUwxqdYUg==: 00:25:23.220 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThkMjdlNzM5NDc1ZmQ1MmI4NTdjYjc2NmNmOTkxYWaXS2a7: ]] 00:25:23.220 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThkMjdlNzM5NDc1ZmQ1MmI4NTdjYjc2NmNmOTkxYWaXS2a7: 00:25:23.220 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:25:23.220 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.220 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:23.220 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:23.220 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:23.220 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.220 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:23.220 14:04:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.220 14:04:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.220 14:04:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.220 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.220 14:04:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:25:23.220 14:04:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:23.220 14:04:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:23.220 14:04:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.220 14:04:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.220 14:04:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:23.220 14:04:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.220 14:04:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:23.220 14:04:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:23.220 14:04:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:23.220 14:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:23.220 14:04:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.220 14:04:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.478 nvme0n1 00:25:23.478 14:04:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.478 14:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.478 14:04:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.478 14:04:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.478 14:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.478 14:04:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.478 14:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.478 14:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.478 14:04:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.478 14:04:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.478 14:04:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.478 14:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.478 14:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:25:23.478 14:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.478 14:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:23.478 14:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:23.478 14:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:23.478 14:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTU0NDcwMDQxMDU1ZjBiMDNkMmY5MzM3YjJkNTIxMmQ3NDU1ZWU0ZDgxNmI0ZTVkMjY0NGQzOGNmNWJmYmVmY3hJPG4=: 00:25:23.478 14:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:23.478 14:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:23.478 14:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:23.478 14:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZTU0NDcwMDQxMDU1ZjBiMDNkMmY5MzM3YjJkNTIxMmQ3NDU1ZWU0ZDgxNmI0ZTVkMjY0NGQzOGNmNWJmYmVmY3hJPG4=: 00:25:23.478 14:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:23.478 14:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:25:23.478 14:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.478 14:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:23.478 14:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:23.478 14:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:23.478 14:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.478 14:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:23.478 14:04:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.478 14:04:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.478 14:04:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.478 14:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.478 14:04:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:23.478 14:04:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:23.478 14:04:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:23.478 14:04:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.478 14:04:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.478 14:04:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:23.478 14:04:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.478 14:04:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:23.478 14:04:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:23.478 14:04:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:23.478 14:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:23.478 14:04:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.478 14:04:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.736 nvme0n1 00:25:23.736 14:04:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.736 14:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.736 14:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.736 14:04:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.736 14:04:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.736 14:04:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.736 14:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.736 14:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.736 14:04:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:25:23.736 14:04:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.736 14:04:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.736 14:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:23.736 14:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.736 14:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:25:23.736 14:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.736 14:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:23.736 14:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:23.736 14:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:23.736 14:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjFjNDQwOGM3NjBkZjdiNjUxNzMzMDkyN2MzOWQ0ZGYQsnMf: 00:25:23.736 14:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzA2ZTQ1NzIwOWY5Zjg5YzJiMmY1Nzc1ZGY3ODgxNDYxMGE1Y2Y0MWViYWUxNDBiYWZkYzg1MmE4MWI0ZjI0N1S4BP8=: 00:25:23.736 14:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:23.736 14:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:23.736 14:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjFjNDQwOGM3NjBkZjdiNjUxNzMzMDkyN2MzOWQ0ZGYQsnMf: 00:25:23.736 14:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzA2ZTQ1NzIwOWY5Zjg5YzJiMmY1Nzc1ZGY3ODgxNDYxMGE1Y2Y0MWViYWUxNDBiYWZkYzg1MmE4MWI0ZjI0N1S4BP8=: ]] 00:25:23.736 14:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzA2ZTQ1NzIwOWY5Zjg5YzJiMmY1Nzc1ZGY3ODgxNDYxMGE1Y2Y0MWViYWUxNDBiYWZkYzg1MmE4MWI0ZjI0N1S4BP8=: 00:25:23.736 14:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:25:23.736 14:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.736 14:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:23.736 14:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:23.736 14:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:23.736 14:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.736 14:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:23.736 14:04:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.736 14:04:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.736 14:04:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.736 14:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.736 14:04:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:23.736 14:04:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:23.736 14:04:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:23.736 14:04:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.736 14:04:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.736 14:04:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
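At this point the trace has moved from the ffdhe4096 passes to the ffdhe6144 passes of the same sha512 sweep. For orientation, the driver loop implied by the host/auth.sh@101-104 frames is sketched below; the helper names, loop variables, and arguments are copied from this log, while the array contents and the helper bodies are assumptions, since they are defined earlier in the script and not shown in this excerpt.

    # Hedged reconstruction of the loop suggested by the xtrace (not the verbatim
    # host/auth.sh source). Only the sha512 leg is visible in this excerpt.
    digest=sha512
    dhgroups=(ffdhe4096 ffdhe6144 ffdhe8192)        # groups exercised in this part of the run
    for dhgroup in "${dhgroups[@]}"; do             # host/auth.sh@101
        for keyid in "${!keys[@]}"; do              # host/auth.sh@102, keyids 0..4
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # host/auth.sh@103: program the kernel target side
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # host/auth.sh@104: attach, verify, detach on the host
        done
    done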
00:25:23.736 14:04:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.736 14:04:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:23.736 14:04:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:23.736 14:04:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:23.736 14:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:23.736 14:04:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.736 14:04:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.302 nvme0n1 00:25:24.302 14:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.302 14:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.302 14:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.302 14:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.302 14:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.302 14:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.302 14:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.302 14:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.302 14:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.302 14:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.302 14:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.302 14:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.302 14:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:25:24.302 14:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.302 14:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:24.302 14:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:24.302 14:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:24.302 14:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmFmZmIwOTBmZDE5NGE4OTA3OGQyZjRjZTdiMzk2Zjk5ZDVmNTBmNzcxNGRhOGYxLVa/9A==: 00:25:24.302 14:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmU2ODUxMTJhM2FmYWU1OGU0NWI3NDFhNzJkZjFlOThiMjM5NjQxMDViYWEwYjFlgH+lVw==: 00:25:24.302 14:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:24.302 14:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:24.302 14:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmFmZmIwOTBmZDE5NGE4OTA3OGQyZjRjZTdiMzk2Zjk5ZDVmNTBmNzcxNGRhOGYxLVa/9A==: 00:25:24.302 14:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmU2ODUxMTJhM2FmYWU1OGU0NWI3NDFhNzJkZjFlOThiMjM5NjQxMDViYWEwYjFlgH+lVw==: ]] 00:25:24.302 14:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmU2ODUxMTJhM2FmYWU1OGU0NWI3NDFhNzJkZjFlOThiMjM5NjQxMDViYWEwYjFlgH+lVw==: 00:25:24.302 14:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
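The ffdhe6144/keyid 0 pass that just completed above follows the pattern repeated throughout this section: pin the initiator to the digest and DH group under test, attach with the per-keyid DH-HMAC-CHAP host and controller keys, confirm the controller via bdev_nvme_get_controllers, and detach. Written as direct calls to SPDK's scripts/rpc.py (which the test's rpc_cmd wrapper ultimately drives), the same cycle would look roughly like the sketch below; the address, NQNs, and key names are the ones visible in this log, and it assumes key0/ckey0 were registered earlier in the run.

    # Illustrative replay of one authenticated attach cycle using the RPCs that
    # appear verbatim in the xtrace above; a sketch, not host/auth.sh itself.
    rpc=scripts/rpc.py

    # Restrict the host to a single digest/DH group so only that combination is negotiated.
    $rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

    # Attach with the host key and the bidirectional controller key for keyid 0
    # (key0/ckey0 are assumed to have been set up earlier in the test).
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Authentication succeeded if the controller is visible; then tear it down.
    [[ "$($rpc bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]
    $rpc bdev_nvme_detach_controller nvme0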
00:25:24.302 14:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.302 14:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:24.302 14:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:24.302 14:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:24.302 14:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.302 14:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:24.302 14:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.302 14:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.302 14:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.302 14:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.302 14:04:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:24.302 14:04:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:24.302 14:04:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:24.302 14:04:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.302 14:04:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.302 14:04:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:24.302 14:04:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.302 14:04:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:24.302 14:04:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:24.302 14:04:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:24.302 14:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:24.302 14:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.302 14:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.234 nvme0n1 00:25:25.234 14:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.234 14:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.234 14:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.234 14:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.234 14:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.234 14:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.234 14:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.234 14:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.234 14:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.234 14:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.235 14:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.235 14:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:25:25.235 14:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:25.235 14:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.235 14:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:25.235 14:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:25.235 14:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:25.235 14:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU3OWJhMGRkZGM5NGFhZTE3NGZlNWRlZGU4NmU0NjbI6Tzq: 00:25:25.235 14:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQzM2M3Y2Y4ODFmNTQwMjliNjQ3YjdhNjVkYzcwODk8DVEz: 00:25:25.235 14:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:25.235 14:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:25.235 14:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU3OWJhMGRkZGM5NGFhZTE3NGZlNWRlZGU4NmU0NjbI6Tzq: 00:25:25.235 14:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQzM2M3Y2Y4ODFmNTQwMjliNjQ3YjdhNjVkYzcwODk8DVEz: ]] 00:25:25.235 14:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQzM2M3Y2Y4ODFmNTQwMjliNjQ3YjdhNjVkYzcwODk8DVEz: 00:25:25.235 14:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:25:25.235 14:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.235 14:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:25.235 14:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:25.235 14:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:25.235 14:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.235 14:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:25.235 14:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.235 14:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.235 14:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.235 14:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.235 14:04:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:25.235 14:04:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:25.235 14:04:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:25.235 14:04:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.235 14:04:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.235 14:04:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:25.235 14:04:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.235 14:04:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:25.235 14:04:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:25.235 14:04:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:25.235 14:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:25.235 14:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.235 14:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.800 nvme0n1 00:25:25.800 14:04:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.800 14:04:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.800 14:04:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.800 14:04:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.800 14:04:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.800 14:04:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.800 14:04:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.800 14:04:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.800 14:04:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.800 14:04:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.800 14:04:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.800 14:04:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.800 14:04:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:25:25.800 14:04:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.800 14:04:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:25.800 14:04:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:25.800 14:04:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:25.800 14:04:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWFiNDU0ZjVhMTY2NmM2Yjg0MGE5NWY2YjM4ZTIyNDljNzI0YzEyZTcxOTM5YzUwxqdYUg==: 00:25:25.800 14:04:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThkMjdlNzM5NDc1ZmQ1MmI4NTdjYjc2NmNmOTkxYWaXS2a7: 00:25:25.800 14:04:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:25.800 14:04:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:25.800 14:04:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWFiNDU0ZjVhMTY2NmM2Yjg0MGE5NWY2YjM4ZTIyNDljNzI0YzEyZTcxOTM5YzUwxqdYUg==: 00:25:25.800 14:04:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThkMjdlNzM5NDc1ZmQ1MmI4NTdjYjc2NmNmOTkxYWaXS2a7: ]] 00:25:25.800 14:04:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThkMjdlNzM5NDc1ZmQ1MmI4NTdjYjc2NmNmOTkxYWaXS2a7: 00:25:25.800 14:04:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:25:25.800 14:04:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.800 14:04:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:25.800 14:04:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:25.800 14:04:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:25.800 14:04:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.800 14:04:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:25.800 14:04:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.800 14:04:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.800 14:04:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.800 14:04:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.800 14:04:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:25.800 14:04:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:25.800 14:04:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:25.800 14:04:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.800 14:04:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.800 14:04:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:25.800 14:04:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.800 14:04:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:25.800 14:04:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:25.800 14:04:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:25.800 14:04:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:25.800 14:04:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.800 14:04:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.364 nvme0n1 00:25:26.364 14:04:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.364 14:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.364 14:04:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.364 14:04:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.364 14:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.364 14:04:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.364 14:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.364 14:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.364 14:04:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.364 14:04:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.364 14:04:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.364 14:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.364 14:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:26.364 14:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.364 14:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:26.364 14:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:26.364 14:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:26.364 14:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZTU0NDcwMDQxMDU1ZjBiMDNkMmY5MzM3YjJkNTIxMmQ3NDU1ZWU0ZDgxNmI0ZTVkMjY0NGQzOGNmNWJmYmVmY3hJPG4=: 00:25:26.364 14:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:26.364 14:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:26.364 14:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:26.364 14:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTU0NDcwMDQxMDU1ZjBiMDNkMmY5MzM3YjJkNTIxMmQ3NDU1ZWU0ZDgxNmI0ZTVkMjY0NGQzOGNmNWJmYmVmY3hJPG4=: 00:25:26.364 14:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:26.364 14:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:25:26.364 14:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.364 14:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:26.364 14:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:26.364 14:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:26.364 14:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.364 14:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:26.364 14:04:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.364 14:04:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.364 14:04:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.364 14:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.364 14:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:26.364 14:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:26.364 14:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:26.364 14:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.364 14:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.364 14:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:26.364 14:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.364 14:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:26.364 14:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:26.364 14:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:26.364 14:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:26.364 14:04:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.364 14:04:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.927 nvme0n1 00:25:26.927 14:04:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.927 14:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.927 14:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.927 14:04:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.927 14:04:21 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.927 14:04:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.927 14:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.927 14:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.927 14:04:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.927 14:04:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.927 14:04:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.927 14:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:26.927 14:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.927 14:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:25:26.927 14:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.927 14:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:26.927 14:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:26.928 14:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:26.928 14:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjFjNDQwOGM3NjBkZjdiNjUxNzMzMDkyN2MzOWQ0ZGYQsnMf: 00:25:26.928 14:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzA2ZTQ1NzIwOWY5Zjg5YzJiMmY1Nzc1ZGY3ODgxNDYxMGE1Y2Y0MWViYWUxNDBiYWZkYzg1MmE4MWI0ZjI0N1S4BP8=: 00:25:26.928 14:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:26.928 14:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:26.928 14:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjFjNDQwOGM3NjBkZjdiNjUxNzMzMDkyN2MzOWQ0ZGYQsnMf: 00:25:26.928 14:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzA2ZTQ1NzIwOWY5Zjg5YzJiMmY1Nzc1ZGY3ODgxNDYxMGE1Y2Y0MWViYWUxNDBiYWZkYzg1MmE4MWI0ZjI0N1S4BP8=: ]] 00:25:26.928 14:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzA2ZTQ1NzIwOWY5Zjg5YzJiMmY1Nzc1ZGY3ODgxNDYxMGE1Y2Y0MWViYWUxNDBiYWZkYzg1MmE4MWI0ZjI0N1S4BP8=: 00:25:26.928 14:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:25:26.928 14:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.928 14:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:26.928 14:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:26.928 14:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:26.928 14:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.928 14:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:26.928 14:04:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.928 14:04:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.928 14:04:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.928 14:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.928 14:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:26.928 14:04:21 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:25:26.928 14:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:26.928 14:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.928 14:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.928 14:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:26.928 14:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.928 14:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:26.928 14:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:26.928 14:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:26.928 14:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:26.928 14:04:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.928 14:04:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.858 nvme0n1 00:25:27.858 14:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.858 14:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.858 14:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.858 14:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.858 14:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.859 14:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.859 14:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.859 14:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.859 14:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.859 14:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.859 14:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.859 14:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.859 14:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:25:27.859 14:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.859 14:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:27.859 14:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:27.859 14:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:27.859 14:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmFmZmIwOTBmZDE5NGE4OTA3OGQyZjRjZTdiMzk2Zjk5ZDVmNTBmNzcxNGRhOGYxLVa/9A==: 00:25:27.859 14:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmU2ODUxMTJhM2FmYWU1OGU0NWI3NDFhNzJkZjFlOThiMjM5NjQxMDViYWEwYjFlgH+lVw==: 00:25:27.859 14:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:27.859 14:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:27.859 14:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YmFmZmIwOTBmZDE5NGE4OTA3OGQyZjRjZTdiMzk2Zjk5ZDVmNTBmNzcxNGRhOGYxLVa/9A==: 00:25:27.859 14:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmU2ODUxMTJhM2FmYWU1OGU0NWI3NDFhNzJkZjFlOThiMjM5NjQxMDViYWEwYjFlgH+lVw==: ]] 00:25:27.859 14:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmU2ODUxMTJhM2FmYWU1OGU0NWI3NDFhNzJkZjFlOThiMjM5NjQxMDViYWEwYjFlgH+lVw==: 00:25:27.859 14:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:25:27.859 14:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.859 14:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:27.859 14:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:27.859 14:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:27.859 14:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.859 14:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:27.859 14:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.859 14:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.859 14:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.859 14:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.859 14:04:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:27.859 14:04:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:27.859 14:04:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:27.859 14:04:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.859 14:04:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.859 14:04:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:27.859 14:04:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.859 14:04:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:27.859 14:04:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:27.859 14:04:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:27.859 14:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:27.859 14:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.859 14:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.789 nvme0n1 00:25:28.789 14:04:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.789 14:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.789 14:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.789 14:04:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.789 14:04:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.046 14:04:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.046 14:04:23 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.046 14:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.046 14:04:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.046 14:04:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.046 14:04:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.046 14:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.046 14:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:25:29.046 14:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.046 14:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:29.046 14:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:29.046 14:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:29.046 14:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU3OWJhMGRkZGM5NGFhZTE3NGZlNWRlZGU4NmU0NjbI6Tzq: 00:25:29.046 14:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQzM2M3Y2Y4ODFmNTQwMjliNjQ3YjdhNjVkYzcwODk8DVEz: 00:25:29.046 14:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:29.046 14:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:29.046 14:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU3OWJhMGRkZGM5NGFhZTE3NGZlNWRlZGU4NmU0NjbI6Tzq: 00:25:29.046 14:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQzM2M3Y2Y4ODFmNTQwMjliNjQ3YjdhNjVkYzcwODk8DVEz: ]] 00:25:29.046 14:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQzM2M3Y2Y4ODFmNTQwMjliNjQ3YjdhNjVkYzcwODk8DVEz: 00:25:29.046 14:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:25:29.046 14:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.046 14:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:29.046 14:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:29.046 14:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:29.046 14:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.046 14:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:29.046 14:04:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.046 14:04:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.046 14:04:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.046 14:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.046 14:04:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:29.046 14:04:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:29.046 14:04:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:29.046 14:04:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.046 14:04:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.046 14:04:23 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:29.046 14:04:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.046 14:04:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:29.046 14:04:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:29.046 14:04:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:29.046 14:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:29.046 14:04:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.046 14:04:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.978 nvme0n1 00:25:29.978 14:04:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.978 14:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.978 14:04:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.978 14:04:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.978 14:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.978 14:04:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.978 14:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.978 14:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.978 14:04:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.978 14:04:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.978 14:04:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.978 14:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.978 14:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:25:29.978 14:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.978 14:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:29.978 14:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:29.978 14:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:29.978 14:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWFiNDU0ZjVhMTY2NmM2Yjg0MGE5NWY2YjM4ZTIyNDljNzI0YzEyZTcxOTM5YzUwxqdYUg==: 00:25:29.978 14:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YThkMjdlNzM5NDc1ZmQ1MmI4NTdjYjc2NmNmOTkxYWaXS2a7: 00:25:29.978 14:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:29.978 14:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:29.978 14:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWFiNDU0ZjVhMTY2NmM2Yjg0MGE5NWY2YjM4ZTIyNDljNzI0YzEyZTcxOTM5YzUwxqdYUg==: 00:25:29.978 14:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YThkMjdlNzM5NDc1ZmQ1MmI4NTdjYjc2NmNmOTkxYWaXS2a7: ]] 00:25:29.978 14:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YThkMjdlNzM5NDc1ZmQ1MmI4NTdjYjc2NmNmOTkxYWaXS2a7: 00:25:29.978 14:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:25:29.978 14:04:24 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.978 14:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:29.978 14:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:29.978 14:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:29.978 14:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.978 14:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:29.978 14:04:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.978 14:04:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.978 14:04:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.978 14:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.978 14:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:29.978 14:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:29.978 14:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:29.978 14:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.978 14:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.978 14:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:29.978 14:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.978 14:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:29.978 14:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:29.978 14:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:29.978 14:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:29.978 14:04:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.978 14:04:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.911 nvme0n1 00:25:30.911 14:04:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.911 14:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.911 14:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.911 14:04:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.911 14:04:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.911 14:04:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.911 14:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.911 14:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.911 14:04:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.911 14:04:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.911 14:04:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.911 14:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:25:30.911 14:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:25:30.911 14:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.911 14:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:30.911 14:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:30.911 14:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:30.911 14:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTU0NDcwMDQxMDU1ZjBiMDNkMmY5MzM3YjJkNTIxMmQ3NDU1ZWU0ZDgxNmI0ZTVkMjY0NGQzOGNmNWJmYmVmY3hJPG4=: 00:25:30.911 14:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:30.911 14:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:30.911 14:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:30.911 14:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTU0NDcwMDQxMDU1ZjBiMDNkMmY5MzM3YjJkNTIxMmQ3NDU1ZWU0ZDgxNmI0ZTVkMjY0NGQzOGNmNWJmYmVmY3hJPG4=: 00:25:30.911 14:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:30.911 14:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:25:30.911 14:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.911 14:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:30.911 14:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:30.911 14:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:30.911 14:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.911 14:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:30.911 14:04:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.911 14:04:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.911 14:04:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.911 14:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.911 14:04:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:30.911 14:04:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:30.911 14:04:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:30.911 14:04:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.911 14:04:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.911 14:04:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:30.911 14:04:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.911 14:04:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:30.911 14:04:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:30.911 14:04:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:30.911 14:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:30.911 14:04:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:25:30.911 14:04:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.844 nvme0n1 00:25:31.844 14:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.844 14:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.844 14:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.844 14:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.844 14:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.844 14:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.844 14:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.844 14:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.844 14:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.844 14:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.844 14:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.844 14:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:31.844 14:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.844 14:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:31.844 14:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:31.844 14:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:31.844 14:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmFmZmIwOTBmZDE5NGE4OTA3OGQyZjRjZTdiMzk2Zjk5ZDVmNTBmNzcxNGRhOGYxLVa/9A==: 00:25:31.844 14:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmU2ODUxMTJhM2FmYWU1OGU0NWI3NDFhNzJkZjFlOThiMjM5NjQxMDViYWEwYjFlgH+lVw==: 00:25:31.844 14:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:31.844 14:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:31.844 14:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmFmZmIwOTBmZDE5NGE4OTA3OGQyZjRjZTdiMzk2Zjk5ZDVmNTBmNzcxNGRhOGYxLVa/9A==: 00:25:31.844 14:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmU2ODUxMTJhM2FmYWU1OGU0NWI3NDFhNzJkZjFlOThiMjM5NjQxMDViYWEwYjFlgH+lVw==: ]] 00:25:31.844 14:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmU2ODUxMTJhM2FmYWU1OGU0NWI3NDFhNzJkZjFlOThiMjM5NjQxMDViYWEwYjFlgH+lVw==: 00:25:31.844 14:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:31.844 14:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.844 14:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.844 14:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.844 14:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:25:31.844 14:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:31.844 14:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:31.844 14:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:31.844 14:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.844 
14:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.844 14:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:31.844 14:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.844 14:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:31.844 14:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:31.844 14:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:31.844 14:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:31.844 14:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:25:31.844 14:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:31.844 14:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:31.844 14:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:31.844 14:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:31.844 14:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:31.844 14:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:31.844 14:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.844 14:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.102 request: 00:25:32.102 { 00:25:32.102 "name": "nvme0", 00:25:32.102 "trtype": "tcp", 00:25:32.102 "traddr": "10.0.0.1", 00:25:32.102 "adrfam": "ipv4", 00:25:32.102 "trsvcid": "4420", 00:25:32.102 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:32.102 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:32.102 "prchk_reftag": false, 00:25:32.102 "prchk_guard": false, 00:25:32.102 "hdgst": false, 00:25:32.102 "ddgst": false, 00:25:32.102 "method": "bdev_nvme_attach_controller", 00:25:32.102 "req_id": 1 00:25:32.102 } 00:25:32.102 Got JSON-RPC error response 00:25:32.102 response: 00:25:32.102 { 00:25:32.102 "code": -5, 00:25:32.102 "message": "Input/output error" 00:25:32.102 } 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.102 request: 00:25:32.102 { 00:25:32.102 "name": "nvme0", 00:25:32.102 "trtype": "tcp", 00:25:32.102 "traddr": "10.0.0.1", 00:25:32.102 "adrfam": "ipv4", 00:25:32.102 "trsvcid": "4420", 00:25:32.102 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:32.102 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:32.102 "prchk_reftag": false, 00:25:32.102 "prchk_guard": false, 00:25:32.102 "hdgst": false, 00:25:32.102 "ddgst": false, 00:25:32.102 "dhchap_key": "key2", 00:25:32.102 "method": "bdev_nvme_attach_controller", 00:25:32.102 "req_id": 1 00:25:32.102 } 00:25:32.102 Got JSON-RPC error response 00:25:32.102 response: 00:25:32.102 { 00:25:32.102 "code": -5, 00:25:32.102 "message": "Input/output error" 00:25:32.102 } 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:25:32.102 14:04:26 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:32.102 14:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.103 14:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.360 request: 00:25:32.360 { 00:25:32.360 "name": "nvme0", 00:25:32.360 "trtype": "tcp", 00:25:32.360 "traddr": "10.0.0.1", 00:25:32.360 "adrfam": "ipv4", 
00:25:32.360 "trsvcid": "4420", 00:25:32.360 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:32.360 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:32.360 "prchk_reftag": false, 00:25:32.360 "prchk_guard": false, 00:25:32.360 "hdgst": false, 00:25:32.360 "ddgst": false, 00:25:32.360 "dhchap_key": "key1", 00:25:32.360 "dhchap_ctrlr_key": "ckey2", 00:25:32.360 "method": "bdev_nvme_attach_controller", 00:25:32.360 "req_id": 1 00:25:32.360 } 00:25:32.360 Got JSON-RPC error response 00:25:32.360 response: 00:25:32.360 { 00:25:32.360 "code": -5, 00:25:32.360 "message": "Input/output error" 00:25:32.360 } 00:25:32.360 14:04:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:32.360 14:04:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:25:32.360 14:04:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:32.360 14:04:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:32.360 14:04:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:32.360 14:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:25:32.360 14:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:25:32.360 14:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:25:32.360 14:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:32.360 14:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:25:32.360 14:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:32.360 14:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:25:32.360 14:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:32.360 14:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:32.360 rmmod nvme_tcp 00:25:32.360 rmmod nvme_fabrics 00:25:32.360 14:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:32.360 14:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:25:32.360 14:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:25:32.360 14:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 3842238 ']' 00:25:32.360 14:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 3842238 00:25:32.360 14:04:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 3842238 ']' 00:25:32.360 14:04:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 3842238 00:25:32.360 14:04:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:25:32.360 14:04:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:32.360 14:04:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3842238 00:25:32.360 14:04:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:32.360 14:04:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:32.360 14:04:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3842238' 00:25:32.360 killing process with pid 3842238 00:25:32.360 14:04:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 3842238 00:25:32.360 14:04:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 3842238 00:25:32.618 14:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:25:32.618 14:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:32.618 14:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:32.618 14:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:32.618 14:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:32.618 14:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:32.618 14:04:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:32.618 14:04:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:35.149 14:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:35.149 14:04:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:35.149 14:04:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:35.149 14:04:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:25:35.149 14:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:25:35.149 14:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:25:35.149 14:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:35.149 14:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:35.149 14:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:35.149 14:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:35.149 14:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:25:35.149 14:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:25:35.149 14:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:36.083 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:25:36.083 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:25:36.083 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:25:36.083 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:25:36.083 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:25:36.083 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:25:36.083 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:25:36.083 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:25:36.083 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:25:36.083 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:25:36.083 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:25:36.083 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:25:36.083 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:25:36.083 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:25:36.083 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:25:36.083 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:25:37.018 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:25:37.018 14:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.eYk /tmp/spdk.key-null.vzK /tmp/spdk.key-sha256.csZ /tmp/spdk.key-sha384.bHB /tmp/spdk.key-sha512.JMV 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:25:37.018 14:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:38.391 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:38.391 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:38.391 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:38.391 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:38.391 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:38.391 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:38.391 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:38.391 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:38.391 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:38.391 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:38.391 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:38.391 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:38.391 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:38.391 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:38.391 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:38.391 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:38.391 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:38.391 00:25:38.391 real 0m50.407s 00:25:38.391 user 0m47.964s 00:25:38.391 sys 0m5.955s 00:25:38.391 14:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:38.391 14:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.391 ************************************ 00:25:38.391 END TEST nvmf_auth_host 00:25:38.391 ************************************ 00:25:38.391 14:04:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:38.391 14:04:33 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:25:38.391 14:04:33 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:38.391 14:04:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:38.391 14:04:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:38.391 14:04:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:38.391 ************************************ 00:25:38.391 START TEST nvmf_digest 00:25:38.391 ************************************ 00:25:38.391 14:04:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:38.391 * Looking for test storage... 
00:25:38.391 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:38.391 14:04:33 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:38.391 14:04:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:25:38.391 14:04:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:38.391 14:04:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:38.391 14:04:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:38.391 14:04:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:38.391 14:04:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:38.391 14:04:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:38.391 14:04:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:38.391 14:04:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:38.391 14:04:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:38.391 14:04:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:38.391 14:04:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:38.391 14:04:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:25:38.391 14:04:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:38.391 14:04:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:38.391 14:04:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:38.391 14:04:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:38.391 14:04:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:38.391 14:04:33 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:38.391 14:04:33 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:38.391 14:04:33 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:38.391 14:04:33 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.391 14:04:33 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.391 14:04:33 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.391 14:04:33 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:25:38.391 14:04:33 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.391 14:04:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:25:38.391 14:04:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:38.391 14:04:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:38.391 14:04:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:38.391 14:04:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:38.391 14:04:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:38.391 14:04:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:38.391 14:04:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:38.391 14:04:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:38.391 14:04:33 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:25:38.391 14:04:33 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:25:38.391 14:04:33 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:25:38.391 14:04:33 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:25:38.391 14:04:33 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:25:38.391 14:04:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:38.391 14:04:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:38.391 14:04:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:38.391 14:04:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:38.391 14:04:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:38.391 14:04:33 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:38.391 14:04:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:38.391 14:04:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:38.391 14:04:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:38.391 14:04:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:38.391 14:04:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:25:38.391 14:04:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:40.918 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:40.918 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:25:40.918 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:40.918 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:40.918 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:40.918 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:40.918 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:40.918 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:25:40.918 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:40.918 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:25:40.918 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:25:40.918 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:25:40.918 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:25:40.918 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:25:40.918 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:25:40.918 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:40.918 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:40.918 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:40.918 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:40.918 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:40.918 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:40.918 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:40.918 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:40.918 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:40.918 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:40.918 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:40.918 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:40.918 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:40.918 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:40.918 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:25:40.918 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:40.918 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:40.918 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:40.918 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:25:40.918 Found 0000:84:00.0 (0x8086 - 0x159b) 00:25:40.918 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:40.918 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:40.918 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:40.918 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:40.918 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:40.918 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:40.918 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:25:40.918 Found 0000:84:00.1 (0x8086 - 0x159b) 00:25:40.918 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:40.918 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:40.918 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:40.918 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:40.918 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:40.918 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:40.918 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:40.918 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:40.918 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:40.918 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:40.918 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:40.918 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:40.918 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:40.918 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:40.918 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:40.918 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:25:40.918 Found net devices under 0000:84:00.0: cvl_0_0 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:25:40.919 Found net devices under 0000:84:00.1: cvl_0_1 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:40.919 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:40.919 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:25:40.919 00:25:40.919 --- 10.0.0.2 ping statistics --- 00:25:40.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.919 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:40.919 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:40.919 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:25:40.919 00:25:40.919 --- 10.0.0.1 ping statistics --- 00:25:40.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.919 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:40.919 ************************************ 00:25:40.919 START TEST nvmf_digest_clean 00:25:40.919 ************************************ 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=3851869 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 3851869 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3851869 ']' 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:40.919 
14:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:40.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:40.919 [2024-07-15 14:04:35.450994] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:25:40.919 [2024-07-15 14:04:35.451090] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:40.919 EAL: No free 2048 kB hugepages reported on node 1 00:25:40.919 [2024-07-15 14:04:35.515912] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:40.919 [2024-07-15 14:04:35.625042] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:40.919 [2024-07-15 14:04:35.625096] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:40.919 [2024-07-15 14:04:35.625120] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:40.919 [2024-07-15 14:04:35.625130] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:40.919 [2024-07-15 14:04:35.625140] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
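For context: the digest target above is started by nvmfappstart inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc, and the log shortly afterwards reports a null0 bdev, the TCP transport init and a listener on 10.0.0.2:4420. The rpc_cmd batch that common_target_config sends is not expanded in this capture, so the sketch below is only an approximation; the individual RPC names are assumptions standing in for whatever the script actually issues, while the paths, namespace, NQN, serial and listener address are taken from the surrounding log.

  # Rough equivalent of the traced target bring-up (RPC names assumed).
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  NS="ip netns exec cvl_0_0_ns_spdk"

  # Start the target paused; the real script then waits for /var/tmp/spdk.sock.
  $NS "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &

  $NS "$SPDK/scripts/rpc.py" framework_start_init
  $NS "$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o
  $NS "$SPDK/scripts/rpc.py" bdev_null_create null0 1000 512
  $NS "$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $NS "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  $NS "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420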
00:25:40.919 [2024-07-15 14:04:35.625166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.919 14:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:41.177 null0 00:25:41.177 [2024-07-15 14:04:35.796984] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:41.177 [2024-07-15 14:04:35.821227] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:41.177 14:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.177 14:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:25:41.177 14:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:41.177 14:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:41.177 14:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:41.177 14:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:41.177 14:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:41.177 14:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:41.177 14:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3851896 00:25:41.177 14:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:41.177 14:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3851896 /var/tmp/bperf.sock 00:25:41.177 14:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3851896 ']' 00:25:41.177 14:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:41.177 14:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:41.177 14:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:25:41.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:41.177 14:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:41.177 14:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:41.177 [2024-07-15 14:04:35.866863] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:25:41.177 [2024-07-15 14:04:35.866935] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3851896 ] 00:25:41.177 EAL: No free 2048 kB hugepages reported on node 1 00:25:41.177 [2024-07-15 14:04:35.925063] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:41.453 [2024-07-15 14:04:36.032074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:41.453 14:04:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:41.453 14:04:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:25:41.453 14:04:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:41.453 14:04:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:41.453 14:04:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:41.762 14:04:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:41.762 14:04:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:42.327 nvme0n1 00:25:42.327 14:04:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:42.327 14:04:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:42.327 Running I/O for 2 seconds... 
00:25:44.224 00:25:44.224 Latency(us) 00:25:44.224 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:44.224 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:44.224 nvme0n1 : 2.00 20515.25 80.14 0.00 0.00 6232.30 2779.21 17573.36 00:25:44.224 =================================================================================================================== 00:25:44.224 Total : 20515.25 80.14 0.00 0.00 6232.30 2779.21 17573.36 00:25:44.224 0 00:25:44.224 14:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:44.224 14:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:44.224 14:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:44.224 14:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:44.224 14:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:44.224 | select(.opcode=="crc32c") 00:25:44.224 | "\(.module_name) \(.executed)"' 00:25:44.482 14:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:44.482 14:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:44.482 14:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:44.482 14:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:44.482 14:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3851896 00:25:44.482 14:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3851896 ']' 00:25:44.482 14:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3851896 00:25:44.482 14:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:25:44.482 14:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:44.482 14:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3851896 00:25:44.482 14:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:44.482 14:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:44.482 14:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3851896' 00:25:44.482 killing process with pid 3851896 00:25:44.482 14:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3851896 00:25:44.482 Received shutdown signal, test time was about 2.000000 seconds 00:25:44.482 00:25:44.482 Latency(us) 00:25:44.482 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:44.482 =================================================================================================================== 00:25:44.482 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:44.482 14:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3851896 00:25:44.738 14:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:25:44.738 14:04:39 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:44.738 14:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:44.738 14:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:44.738 14:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:44.738 14:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:44.738 14:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:44.738 14:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3852308 00:25:44.738 14:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:44.738 14:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3852308 /var/tmp/bperf.sock 00:25:44.738 14:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3852308 ']' 00:25:44.738 14:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:44.738 14:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:44.738 14:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:44.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:44.738 14:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:44.738 14:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:44.995 [2024-07-15 14:04:39.587172] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:25:44.995 [2024-07-15 14:04:39.587249] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3852308 ] 00:25:44.995 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:44.995 Zero copy mechanism will not be used. 
00:25:44.995 EAL: No free 2048 kB hugepages reported on node 1 00:25:44.995 [2024-07-15 14:04:39.648857] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:44.995 [2024-07-15 14:04:39.759164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:44.996 14:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:44.996 14:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:25:44.996 14:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:44.996 14:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:44.996 14:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:45.561 14:04:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:45.561 14:04:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:45.818 nvme0n1 00:25:45.818 14:04:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:45.818 14:04:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:46.076 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:46.076 Zero copy mechanism will not be used. 00:25:46.076 Running I/O for 2 seconds... 
00:25:47.974 00:25:47.974 Latency(us) 00:25:47.974 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:47.974 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:47.974 nvme0n1 : 2.00 4223.33 527.92 0.00 0.00 3784.54 725.14 8252.68 00:25:47.974 =================================================================================================================== 00:25:47.974 Total : 4223.33 527.92 0.00 0.00 3784.54 725.14 8252.68 00:25:47.974 0 00:25:47.974 14:04:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:47.974 14:04:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:47.974 14:04:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:47.974 14:04:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:47.974 | select(.opcode=="crc32c") 00:25:47.974 | "\(.module_name) \(.executed)"' 00:25:47.974 14:04:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:48.232 14:04:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:48.232 14:04:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:48.232 14:04:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:48.232 14:04:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:48.232 14:04:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3852308 00:25:48.232 14:04:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3852308 ']' 00:25:48.232 14:04:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3852308 00:25:48.232 14:04:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:25:48.232 14:04:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:48.232 14:04:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3852308 00:25:48.232 14:04:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:48.232 14:04:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:48.232 14:04:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3852308' 00:25:48.232 killing process with pid 3852308 00:25:48.232 14:04:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3852308 00:25:48.232 Received shutdown signal, test time was about 2.000000 seconds 00:25:48.232 00:25:48.232 Latency(us) 00:25:48.232 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:48.232 =================================================================================================================== 00:25:48.232 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:48.232 14:04:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3852308 00:25:48.490 14:04:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:25:48.490 14:04:43 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:48.490 14:04:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:48.490 14:04:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:48.490 14:04:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:48.490 14:04:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:48.490 14:04:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:48.490 14:04:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3852827 00:25:48.490 14:04:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:48.490 14:04:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3852827 /var/tmp/bperf.sock 00:25:48.490 14:04:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3852827 ']' 00:25:48.490 14:04:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:48.490 14:04:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:48.490 14:04:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:48.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:48.490 14:04:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:48.490 14:04:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:48.748 [2024-07-15 14:04:43.363818] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
00:25:48.748 [2024-07-15 14:04:43.363895] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3852827 ] 00:25:48.748 EAL: No free 2048 kB hugepages reported on node 1 00:25:48.748 [2024-07-15 14:04:43.423462] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:48.748 [2024-07-15 14:04:43.532483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:48.748 14:04:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:48.748 14:04:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:25:48.748 14:04:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:48.748 14:04:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:48.748 14:04:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:49.313 14:04:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:49.313 14:04:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:49.570 nvme0n1 00:25:49.828 14:04:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:49.828 14:04:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:49.828 Running I/O for 2 seconds... 
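After each two-second run the harness checks not only that I/O completed but that the crc32c digest work was actually executed, and by which accel module. The check recorded at the end of the previous iteration (and repeated after this one) boils down to the following sketch, reusing $SPDK and the bperf socket from the sketch above; software is the expected module here because the test runs with scan_dsa=false:

    # Fetch per-opcode accel statistics from bdevperf and keep the crc32c entry,
    # printing "<module_name> <executed>".
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"' \
      | { read -r acc_module acc_executed
          # Pass only if at least one crc32c operation ran and it ran on the
          # expected module (software, since no DSA device was scanned).
          (( acc_executed > 0 )) && [[ $acc_module == software ]]; }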
00:25:51.726 00:25:51.726 Latency(us) 00:25:51.726 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:51.726 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:51.726 nvme0n1 : 2.00 23848.46 93.16 0.00 0.00 5359.27 2245.21 15243.19 00:25:51.726 =================================================================================================================== 00:25:51.726 Total : 23848.46 93.16 0.00 0.00 5359.27 2245.21 15243.19 00:25:51.726 0 00:25:51.726 14:04:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:51.726 14:04:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:51.726 14:04:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:51.726 14:04:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:51.726 14:04:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:51.726 | select(.opcode=="crc32c") 00:25:51.726 | "\(.module_name) \(.executed)"' 00:25:51.983 14:04:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:51.983 14:04:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:51.984 14:04:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:51.984 14:04:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:51.984 14:04:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3852827 00:25:51.984 14:04:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3852827 ']' 00:25:51.984 14:04:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3852827 00:25:51.984 14:04:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:25:51.984 14:04:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:51.984 14:04:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3852827 00:25:52.241 14:04:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:52.241 14:04:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:52.241 14:04:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3852827' 00:25:52.241 killing process with pid 3852827 00:25:52.241 14:04:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3852827 00:25:52.241 Received shutdown signal, test time was about 2.000000 seconds 00:25:52.241 00:25:52.241 Latency(us) 00:25:52.241 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:52.241 =================================================================================================================== 00:25:52.241 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:52.241 14:04:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3852827 00:25:52.241 14:04:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:25:52.241 14:04:47 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:52.241 14:04:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:52.241 14:04:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:52.241 14:04:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:52.241 14:04:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:52.241 14:04:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:52.241 14:04:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3853241 00:25:52.241 14:04:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:52.242 14:04:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3853241 /var/tmp/bperf.sock 00:25:52.242 14:04:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3853241 ']' 00:25:52.242 14:04:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:52.242 14:04:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:52.242 14:04:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:52.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:52.242 14:04:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:52.242 14:04:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:52.499 [2024-07-15 14:04:47.111255] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:25:52.499 [2024-07-15 14:04:47.111336] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3853241 ] 00:25:52.499 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:52.499 Zero copy mechanism will not be used. 
00:25:52.499 EAL: No free 2048 kB hugepages reported on node 1 00:25:52.499 [2024-07-15 14:04:47.170475] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:52.499 [2024-07-15 14:04:47.278818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:52.499 14:04:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:52.499 14:04:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:25:52.499 14:04:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:52.499 14:04:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:52.499 14:04:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:53.065 14:04:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:53.065 14:04:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:53.322 nvme0n1 00:25:53.322 14:04:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:53.322 14:04:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:53.581 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:53.581 Zero copy mechanism will not be used. 00:25:53.581 Running I/O for 2 seconds... 
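Each iteration then tears down its bdevperf instance before the next one starts. The killprocess calls seen throughout this section first probe the pid with kill -0, read the process name back with ps to confirm the pid still belongs to the reactor that was started, and only then send the kill. A simplified stand-in, for illustration only (the harness's own killprocess in common/autotest_common.sh does more bookkeeping, as the stack frames above show):

    # Hypothetical, simplified equivalent of the killprocess helper used above.
    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0          # nothing left to do
        local name
        name=$(ps --no-headers -o comm= "$pid")         # e.g. reactor_1 for bdevperf
        echo "killing process with pid $pid ($name)"
        kill "$pid"
        wait "$pid" 2>/dev/null || true                 # reap it if it was started from this shell
    }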
00:25:55.480 00:25:55.480 Latency(us) 00:25:55.480 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:55.480 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:55.480 nvme0n1 : 2.00 4521.70 565.21 0.00 0.00 3530.53 2548.62 13204.29 00:25:55.480 =================================================================================================================== 00:25:55.480 Total : 4521.70 565.21 0.00 0.00 3530.53 2548.62 13204.29 00:25:55.480 0 00:25:55.480 14:04:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:55.480 14:04:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:55.480 14:04:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:55.480 14:04:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:55.480 14:04:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:55.480 | select(.opcode=="crc32c") 00:25:55.480 | "\(.module_name) \(.executed)"' 00:25:55.738 14:04:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:55.738 14:04:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:55.738 14:04:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:55.738 14:04:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:55.738 14:04:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3853241 00:25:55.738 14:04:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3853241 ']' 00:25:55.738 14:04:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3853241 00:25:55.738 14:04:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:25:55.738 14:04:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:55.738 14:04:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3853241 00:25:55.738 14:04:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:55.738 14:04:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:55.738 14:04:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3853241' 00:25:55.738 killing process with pid 3853241 00:25:55.738 14:04:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3853241 00:25:55.738 Received shutdown signal, test time was about 2.000000 seconds 00:25:55.738 00:25:55.738 Latency(us) 00:25:55.738 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:55.738 =================================================================================================================== 00:25:55.738 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:55.738 14:04:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3853241 00:25:55.995 14:04:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3851869 00:25:55.995 14:04:50 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3851869 ']' 00:25:55.995 14:04:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3851869 00:25:55.995 14:04:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:25:55.995 14:04:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:55.996 14:04:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3851869 00:25:56.253 14:04:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:56.253 14:04:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:56.253 14:04:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3851869' 00:25:56.253 killing process with pid 3851869 00:25:56.253 14:04:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3851869 00:25:56.253 14:04:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3851869 00:25:56.511 00:25:56.511 real 0m15.717s 00:25:56.511 user 0m30.389s 00:25:56.511 sys 0m5.087s 00:25:56.511 14:04:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:56.511 14:04:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:56.511 ************************************ 00:25:56.511 END TEST nvmf_digest_clean 00:25:56.511 ************************************ 00:25:56.511 14:04:51 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:25:56.511 14:04:51 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:25:56.511 14:04:51 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:56.511 14:04:51 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:56.511 14:04:51 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:56.511 ************************************ 00:25:56.511 START TEST nvmf_digest_error 00:25:56.511 ************************************ 00:25:56.511 14:04:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:25:56.511 14:04:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:25:56.511 14:04:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:56.511 14:04:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:56.511 14:04:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:56.511 14:04:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=3853794 00:25:56.511 14:04:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:56.511 14:04:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 3853794 00:25:56.511 14:04:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3853794 ']' 00:25:56.511 14:04:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:25:56.511 14:04:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:56.511 14:04:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:56.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:56.511 14:04:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:56.511 14:04:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:56.511 [2024-07-15 14:04:51.223856] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:25:56.511 [2024-07-15 14:04:51.223938] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:56.511 EAL: No free 2048 kB hugepages reported on node 1 00:25:56.511 [2024-07-15 14:04:51.286634] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:56.769 [2024-07-15 14:04:51.387205] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:56.769 [2024-07-15 14:04:51.387259] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:56.769 [2024-07-15 14:04:51.387286] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:56.769 [2024-07-15 14:04:51.387297] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:56.769 [2024-07-15 14:04:51.387307] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
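The nvmf_digest_error test that begins here inverts the digest_clean checks above: instead of confirming that digests are computed, it corrupts the crc32c work on the target so the data digests it sends are wrong, and verifies that the initiator notices (the digest errors further below are reported by the initiator's nvme_tcp receive path). The RPC sequence in the lines that follow amounts to this sketch, reusing $SPDK from the first sketch; rpc.py with no -s talks to the nvmf target's default /var/tmp/spdk.sock, while -s /var/tmp/bperf.sock talks to bdevperf:

    # Target side: route crc32c through the error-injection accel module.
    $SPDK/scripts/rpc.py accel_assign_opc -o crc32c -m error

    # Initiator side: keep NVMe error statistics and retry indefinitely, so the
    # injected digest failures are visible but do not fail the bdev layer.
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options \
        --nvme-error-stat --bdev-retry-count -1

    # Attach with data digest on while injection is still disabled, then corrupt
    # 256 crc32c operations and run the workload.  Every corrupted digest shows
    # up in the log below as "data digest error on tqpair" plus a COMMAND
    # TRANSIENT TRANSPORT ERROR (00/22) completion that gets retried.
    $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests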
00:25:56.769 [2024-07-15 14:04:51.387339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:56.769 14:04:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:56.769 14:04:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:25:56.769 14:04:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:56.769 14:04:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:56.769 14:04:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:56.769 14:04:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:56.769 14:04:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:25:56.769 14:04:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.769 14:04:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:56.769 [2024-07-15 14:04:51.451850] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:25:56.769 14:04:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.769 14:04:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:25:56.769 14:04:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:25:56.769 14:04:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.769 14:04:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:56.769 null0 00:25:56.769 [2024-07-15 14:04:51.567065] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:56.769 [2024-07-15 14:04:51.591271] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:56.769 14:04:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.769 14:04:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:25:56.769 14:04:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:56.769 14:04:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:56.769 14:04:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:56.769 14:04:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:56.769 14:04:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3853823 00:25:56.769 14:04:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:25:56.769 14:04:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3853823 /var/tmp/bperf.sock 00:25:56.769 14:04:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3853823 ']' 00:25:56.769 14:04:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:56.769 14:04:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:25:56.769 14:04:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:56.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:56.769 14:04:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:56.769 14:04:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:57.026 [2024-07-15 14:04:51.636461] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:25:57.026 [2024-07-15 14:04:51.636538] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3853823 ] 00:25:57.026 EAL: No free 2048 kB hugepages reported on node 1 00:25:57.026 [2024-07-15 14:04:51.694704] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:57.026 [2024-07-15 14:04:51.799678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:57.283 14:04:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:57.283 14:04:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:25:57.283 14:04:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:57.283 14:04:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:57.540 14:04:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:57.540 14:04:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.540 14:04:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:57.540 14:04:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.540 14:04:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:57.540 14:04:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:57.797 nvme0n1 00:25:57.797 14:04:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:57.797 14:04:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.797 14:04:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:57.797 14:04:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.797 14:04:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:57.797 14:04:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:58.055 Running I/O for 2 seconds... 00:25:58.055 [2024-07-15 14:04:52.696797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.055 [2024-07-15 14:04:52.696859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.055 [2024-07-15 14:04:52.696879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.055 [2024-07-15 14:04:52.712273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.055 [2024-07-15 14:04:52.712302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:10226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.055 [2024-07-15 14:04:52.712334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.055 [2024-07-15 14:04:52.725926] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.055 [2024-07-15 14:04:52.725956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:9351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.055 [2024-07-15 14:04:52.725989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.055 [2024-07-15 14:04:52.737196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.055 [2024-07-15 14:04:52.737227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:7402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.055 [2024-07-15 14:04:52.737269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.055 [2024-07-15 14:04:52.751996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.055 [2024-07-15 14:04:52.752026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:25343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.055 [2024-07-15 14:04:52.752058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.055 [2024-07-15 14:04:52.764371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.055 [2024-07-15 14:04:52.764400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:22053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.055 [2024-07-15 14:04:52.764431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.055 [2024-07-15 14:04:52.775715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.055 [2024-07-15 14:04:52.775766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:3463 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.055 [2024-07-15 14:04:52.775785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.055 [2024-07-15 14:04:52.786191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.055 [2024-07-15 14:04:52.786220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:20419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.055 [2024-07-15 14:04:52.786251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.055 [2024-07-15 14:04:52.799677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.055 [2024-07-15 14:04:52.799705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:3541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.055 [2024-07-15 14:04:52.799743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.055 [2024-07-15 14:04:52.813423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.055 [2024-07-15 14:04:52.813452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.055 [2024-07-15 14:04:52.813485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.055 [2024-07-15 14:04:52.823821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.055 [2024-07-15 14:04:52.823850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:25417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.055 [2024-07-15 14:04:52.823883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.055 [2024-07-15 14:04:52.836411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.055 [2024-07-15 14:04:52.836438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:12893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.055 [2024-07-15 14:04:52.836469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.055 [2024-07-15 14:04:52.848349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.055 [2024-07-15 14:04:52.848377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.055 [2024-07-15 14:04:52.848410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.055 [2024-07-15 14:04:52.858296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.055 [2024-07-15 14:04:52.858324] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:19 nsid:1 lba:13927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.055 [2024-07-15 14:04:52.858355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.055 [2024-07-15 14:04:52.873133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.055 [2024-07-15 14:04:52.873161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:24499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.055 [2024-07-15 14:04:52.873192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.055 [2024-07-15 14:04:52.884280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.055 [2024-07-15 14:04:52.884308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.055 [2024-07-15 14:04:52.884339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.312 [2024-07-15 14:04:52.896548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.312 [2024-07-15 14:04:52.896578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:23688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.312 [2024-07-15 14:04:52.896610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.312 [2024-07-15 14:04:52.907566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.312 [2024-07-15 14:04:52.907596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.312 [2024-07-15 14:04:52.907628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.312 [2024-07-15 14:04:52.919332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.312 [2024-07-15 14:04:52.919361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:24992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.312 [2024-07-15 14:04:52.919392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.312 [2024-07-15 14:04:52.931898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.312 [2024-07-15 14:04:52.931928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.312 [2024-07-15 14:04:52.931960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.312 [2024-07-15 14:04:52.943192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.312 [2024-07-15 14:04:52.943220] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:11160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.312 [2024-07-15 14:04:52.943259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.312 [2024-07-15 14:04:52.953586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.312 [2024-07-15 14:04:52.953615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.312 [2024-07-15 14:04:52.953646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.312 [2024-07-15 14:04:52.966572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.313 [2024-07-15 14:04:52.966600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.313 [2024-07-15 14:04:52.966631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.313 [2024-07-15 14:04:52.978783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.313 [2024-07-15 14:04:52.978812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.313 [2024-07-15 14:04:52.978843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.313 [2024-07-15 14:04:52.990273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.313 [2024-07-15 14:04:52.990302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:8147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.313 [2024-07-15 14:04:52.990332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.313 [2024-07-15 14:04:53.001968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.313 [2024-07-15 14:04:53.001996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:6456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.313 [2024-07-15 14:04:53.002028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.313 [2024-07-15 14:04:53.012484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.313 [2024-07-15 14:04:53.012512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:4634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.313 [2024-07-15 14:04:53.012543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.313 [2024-07-15 14:04:53.027035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1f3c280) 00:25:58.313 [2024-07-15 14:04:53.027064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:17541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.313 [2024-07-15 14:04:53.027094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.313 [2024-07-15 14:04:53.040061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.313 [2024-07-15 14:04:53.040089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:4779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.313 [2024-07-15 14:04:53.040104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.313 [2024-07-15 14:04:53.050472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.313 [2024-07-15 14:04:53.050506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:10210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.313 [2024-07-15 14:04:53.050539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.313 [2024-07-15 14:04:53.063307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.313 [2024-07-15 14:04:53.063336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:22795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.313 [2024-07-15 14:04:53.063368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.313 [2024-07-15 14:04:53.077839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.313 [2024-07-15 14:04:53.077868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:16225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.313 [2024-07-15 14:04:53.077901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.313 [2024-07-15 14:04:53.088597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.313 [2024-07-15 14:04:53.088625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:7041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.313 [2024-07-15 14:04:53.088656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.313 [2024-07-15 14:04:53.102983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.313 [2024-07-15 14:04:53.103012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.313 [2024-07-15 14:04:53.103045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.313 [2024-07-15 14:04:53.112625] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.313 [2024-07-15 14:04:53.112653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:9796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.313 [2024-07-15 14:04:53.112683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.313 [2024-07-15 14:04:53.124213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.313 [2024-07-15 14:04:53.124240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:22542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.313 [2024-07-15 14:04:53.124271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.313 [2024-07-15 14:04:53.136446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.313 [2024-07-15 14:04:53.136473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:22951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.313 [2024-07-15 14:04:53.136504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.313 [2024-07-15 14:04:53.147629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.313 [2024-07-15 14:04:53.147656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:13170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.313 [2024-07-15 14:04:53.147686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.569 [2024-07-15 14:04:53.161811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.569 [2024-07-15 14:04:53.161841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:9039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.569 [2024-07-15 14:04:53.161874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.569 [2024-07-15 14:04:53.173159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.569 [2024-07-15 14:04:53.173187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:18883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.569 [2024-07-15 14:04:53.173218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.569 [2024-07-15 14:04:53.187178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.569 [2024-07-15 14:04:53.187206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:15966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.569 [2024-07-15 14:04:53.187236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:25:58.569 [2024-07-15 14:04:53.199303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.569 [2024-07-15 14:04:53.199330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:6212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.569 [2024-07-15 14:04:53.199360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.569 [2024-07-15 14:04:53.209555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.569 [2024-07-15 14:04:53.209583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:17063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.569 [2024-07-15 14:04:53.209612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.569 [2024-07-15 14:04:53.223972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.569 [2024-07-15 14:04:53.224003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.569 [2024-07-15 14:04:53.224035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.569 [2024-07-15 14:04:53.236005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.569 [2024-07-15 14:04:53.236060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.569 [2024-07-15 14:04:53.236076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.569 [2024-07-15 14:04:53.246547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.569 [2024-07-15 14:04:53.246575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:8735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.569 [2024-07-15 14:04:53.246605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.569 [2024-07-15 14:04:53.259250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.569 [2024-07-15 14:04:53.259288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:14772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.569 [2024-07-15 14:04:53.259324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.569 [2024-07-15 14:04:53.269733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.569 [2024-07-15 14:04:53.269781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:2372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.569 [2024-07-15 14:04:53.269813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.569 [2024-07-15 14:04:53.283651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.569 [2024-07-15 14:04:53.283678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:5687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.569 [2024-07-15 14:04:53.283708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.569 [2024-07-15 14:04:53.298314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.569 [2024-07-15 14:04:53.298341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:11409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.569 [2024-07-15 14:04:53.298372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.569 [2024-07-15 14:04:53.308457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.569 [2024-07-15 14:04:53.308485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:6864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.570 [2024-07-15 14:04:53.308516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.570 [2024-07-15 14:04:53.322484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.570 [2024-07-15 14:04:53.322512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:6060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.570 [2024-07-15 14:04:53.322542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.570 [2024-07-15 14:04:53.337306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.570 [2024-07-15 14:04:53.337334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:17292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.570 [2024-07-15 14:04:53.337366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.570 [2024-07-15 14:04:53.352233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.570 [2024-07-15 14:04:53.352261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:19094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.570 [2024-07-15 14:04:53.352291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.570 [2024-07-15 14:04:53.361998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.570 [2024-07-15 14:04:53.362027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:13125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.570 [2024-07-15 14:04:53.362066] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.570 [2024-07-15 14:04:53.374453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.570 [2024-07-15 14:04:53.374486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:21586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.570 [2024-07-15 14:04:53.374516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.570 [2024-07-15 14:04:53.386414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.570 [2024-07-15 14:04:53.386442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:2008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.570 [2024-07-15 14:04:53.386472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.570 [2024-07-15 14:04:53.396624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.570 [2024-07-15 14:04:53.396651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:19612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.570 [2024-07-15 14:04:53.396681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.570 [2024-07-15 14:04:53.408928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.570 [2024-07-15 14:04:53.408960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:14668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.570 [2024-07-15 14:04:53.408978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.827 [2024-07-15 14:04:53.421732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.827 [2024-07-15 14:04:53.421769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:16536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.827 [2024-07-15 14:04:53.421800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.827 [2024-07-15 14:04:53.432416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.827 [2024-07-15 14:04:53.432444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:23564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.827 [2024-07-15 14:04:53.432475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.827 [2024-07-15 14:04:53.443937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.827 [2024-07-15 14:04:53.443964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:3713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:58.827 [2024-07-15 14:04:53.443996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.827 [2024-07-15 14:04:53.456391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.827 [2024-07-15 14:04:53.456418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:6671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.827 [2024-07-15 14:04:53.456449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.827 [2024-07-15 14:04:53.466787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.827 [2024-07-15 14:04:53.466814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:2135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.827 [2024-07-15 14:04:53.466851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.827 [2024-07-15 14:04:53.476947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.827 [2024-07-15 14:04:53.476975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.827 [2024-07-15 14:04:53.477006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.827 [2024-07-15 14:04:53.491489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.827 [2024-07-15 14:04:53.491518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.827 [2024-07-15 14:04:53.491548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.827 [2024-07-15 14:04:53.505139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.827 [2024-07-15 14:04:53.505168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:14247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.827 [2024-07-15 14:04:53.505199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.827 [2024-07-15 14:04:53.514194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.827 [2024-07-15 14:04:53.514222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:15314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.827 [2024-07-15 14:04:53.514254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.827 [2024-07-15 14:04:53.526274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.827 [2024-07-15 14:04:53.526300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 
lba:12199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.827 [2024-07-15 14:04:53.526330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.827 [2024-07-15 14:04:53.537274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.827 [2024-07-15 14:04:53.537301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:22157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.827 [2024-07-15 14:04:53.537333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.827 [2024-07-15 14:04:53.549003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.827 [2024-07-15 14:04:53.549035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.827 [2024-07-15 14:04:53.549064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.827 [2024-07-15 14:04:53.560458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.827 [2024-07-15 14:04:53.560485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:17856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.827 [2024-07-15 14:04:53.560515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.827 [2024-07-15 14:04:53.572391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.827 [2024-07-15 14:04:53.572423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.827 [2024-07-15 14:04:53.572455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.827 [2024-07-15 14:04:53.583651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.827 [2024-07-15 14:04:53.583678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.827 [2024-07-15 14:04:53.583709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.827 [2024-07-15 14:04:53.594408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.827 [2024-07-15 14:04:53.594436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.827 [2024-07-15 14:04:53.594467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.827 [2024-07-15 14:04:53.607981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.827 [2024-07-15 14:04:53.608010] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:19783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.827 [2024-07-15 14:04:53.608041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.827 [2024-07-15 14:04:53.621456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.827 [2024-07-15 14:04:53.621483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.827 [2024-07-15 14:04:53.621515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.827 [2024-07-15 14:04:53.631824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.827 [2024-07-15 14:04:53.631852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.827 [2024-07-15 14:04:53.631884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.827 [2024-07-15 14:04:53.643664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.827 [2024-07-15 14:04:53.643692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:1660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.827 [2024-07-15 14:04:53.643723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.827 [2024-07-15 14:04:53.657206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:58.827 [2024-07-15 14:04:53.657233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.827 [2024-07-15 14:04:53.657264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.084 [2024-07-15 14:04:53.669047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.084 [2024-07-15 14:04:53.669077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:4190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.084 [2024-07-15 14:04:53.669107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.084 [2024-07-15 14:04:53.680681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.084 [2024-07-15 14:04:53.680710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.084 [2024-07-15 14:04:53.680747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.084 [2024-07-15 14:04:53.691669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 
00:25:59.084 [2024-07-15 14:04:53.691696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:5415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.084 [2024-07-15 14:04:53.691726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.084 [2024-07-15 14:04:53.704558] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.085 [2024-07-15 14:04:53.704585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:12256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.085 [2024-07-15 14:04:53.704616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.085 [2024-07-15 14:04:53.715451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.085 [2024-07-15 14:04:53.715479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:2135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.085 [2024-07-15 14:04:53.715509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.085 [2024-07-15 14:04:53.729024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.085 [2024-07-15 14:04:53.729068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.085 [2024-07-15 14:04:53.729084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.085 [2024-07-15 14:04:53.743407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.085 [2024-07-15 14:04:53.743434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:15159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.085 [2024-07-15 14:04:53.743465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.085 [2024-07-15 14:04:53.754705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.085 [2024-07-15 14:04:53.754733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:18799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.085 [2024-07-15 14:04:53.754770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.085 [2024-07-15 14:04:53.765638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.085 [2024-07-15 14:04:53.765665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:13587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.085 [2024-07-15 14:04:53.765696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.085 [2024-07-15 14:04:53.778786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.085 [2024-07-15 14:04:53.778813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.085 [2024-07-15 14:04:53.778851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.085 [2024-07-15 14:04:53.788957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.085 [2024-07-15 14:04:53.788985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.085 [2024-07-15 14:04:53.789016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.085 [2024-07-15 14:04:53.801221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.085 [2024-07-15 14:04:53.801248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:14732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.085 [2024-07-15 14:04:53.801280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.085 [2024-07-15 14:04:53.812464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.085 [2024-07-15 14:04:53.812491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:4868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.085 [2024-07-15 14:04:53.812521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.085 [2024-07-15 14:04:53.823436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.085 [2024-07-15 14:04:53.823463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:3888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.085 [2024-07-15 14:04:53.823493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.085 [2024-07-15 14:04:53.835677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.085 [2024-07-15 14:04:53.835703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:16218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.085 [2024-07-15 14:04:53.835733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.085 [2024-07-15 14:04:53.845644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.085 [2024-07-15 14:04:53.845672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.085 [2024-07-15 14:04:53.845702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.085 [2024-07-15 14:04:53.858159] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.085 [2024-07-15 14:04:53.858187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:8333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.085 [2024-07-15 14:04:53.858217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.085 [2024-07-15 14:04:53.871587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.085 [2024-07-15 14:04:53.871614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:4873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.085 [2024-07-15 14:04:53.871645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.085 [2024-07-15 14:04:53.880891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.085 [2024-07-15 14:04:53.880924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:7089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.085 [2024-07-15 14:04:53.880955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.085 [2024-07-15 14:04:53.894911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.085 [2024-07-15 14:04:53.894940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:22326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.085 [2024-07-15 14:04:53.894972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.085 [2024-07-15 14:04:53.910240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.085 [2024-07-15 14:04:53.910269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:5117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.085 [2024-07-15 14:04:53.910301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.085 [2024-07-15 14:04:53.924941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.085 [2024-07-15 14:04:53.924975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.085 [2024-07-15 14:04:53.924992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.342 [2024-07-15 14:04:53.940995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.342 [2024-07-15 14:04:53.941026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.342 [2024-07-15 14:04:53.941058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:25:59.342 [2024-07-15 14:04:53.950589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.342 [2024-07-15 14:04:53.950617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.342 [2024-07-15 14:04:53.950648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.342 [2024-07-15 14:04:53.964791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.342 [2024-07-15 14:04:53.964821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:14742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.342 [2024-07-15 14:04:53.964852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.342 [2024-07-15 14:04:53.978507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.342 [2024-07-15 14:04:53.978535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.342 [2024-07-15 14:04:53.978566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.342 [2024-07-15 14:04:53.988562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.342 [2024-07-15 14:04:53.988589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:21912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.342 [2024-07-15 14:04:53.988626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.342 [2024-07-15 14:04:54.003391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.342 [2024-07-15 14:04:54.003419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:19657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.342 [2024-07-15 14:04:54.003449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.342 [2024-07-15 14:04:54.016166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.342 [2024-07-15 14:04:54.016193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:21898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.342 [2024-07-15 14:04:54.016223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.342 [2024-07-15 14:04:54.026605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.342 [2024-07-15 14:04:54.026633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.342 [2024-07-15 14:04:54.026664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.342 [2024-07-15 14:04:54.038028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.342 [2024-07-15 14:04:54.038070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:10508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.342 [2024-07-15 14:04:54.038086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.342 [2024-07-15 14:04:54.049513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.342 [2024-07-15 14:04:54.049540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:9839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.342 [2024-07-15 14:04:54.049570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.342 [2024-07-15 14:04:54.061735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.343 [2024-07-15 14:04:54.061785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.343 [2024-07-15 14:04:54.061800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.343 [2024-07-15 14:04:54.072777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.343 [2024-07-15 14:04:54.072805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:2622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.343 [2024-07-15 14:04:54.072836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.343 [2024-07-15 14:04:54.085611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.343 [2024-07-15 14:04:54.085639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.343 [2024-07-15 14:04:54.085670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.343 [2024-07-15 14:04:54.096643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.343 [2024-07-15 14:04:54.096675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.343 [2024-07-15 14:04:54.096705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.343 [2024-07-15 14:04:54.106670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.343 [2024-07-15 14:04:54.106697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:21753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.343 [2024-07-15 14:04:54.106727] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.343 [2024-07-15 14:04:54.120494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.343 [2024-07-15 14:04:54.120522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.343 [2024-07-15 14:04:54.120552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.343 [2024-07-15 14:04:54.135184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.343 [2024-07-15 14:04:54.135212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.343 [2024-07-15 14:04:54.135243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.343 [2024-07-15 14:04:54.146461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.343 [2024-07-15 14:04:54.146489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.343 [2024-07-15 14:04:54.146520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.343 [2024-07-15 14:04:54.162854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.343 [2024-07-15 14:04:54.162882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:21126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.343 [2024-07-15 14:04:54.162913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.343 [2024-07-15 14:04:54.173156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.343 [2024-07-15 14:04:54.173183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:18033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.343 [2024-07-15 14:04:54.173214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.600 [2024-07-15 14:04:54.186969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.600 [2024-07-15 14:04:54.187015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.600 [2024-07-15 14:04:54.187033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.600 [2024-07-15 14:04:54.201144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.600 [2024-07-15 14:04:54.201173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:20062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:59.600 [2024-07-15 14:04:54.201205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.600 [2024-07-15 14:04:54.212030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.600 [2024-07-15 14:04:54.212073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:14996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.600 [2024-07-15 14:04:54.212089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.600 [2024-07-15 14:04:54.225808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.600 [2024-07-15 14:04:54.225839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:17646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.600 [2024-07-15 14:04:54.225856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.600 [2024-07-15 14:04:54.238769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.600 [2024-07-15 14:04:54.238797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:17390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.600 [2024-07-15 14:04:54.238829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.600 [2024-07-15 14:04:54.250327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.600 [2024-07-15 14:04:54.250355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:15360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.600 [2024-07-15 14:04:54.250388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.600 [2024-07-15 14:04:54.262015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.600 [2024-07-15 14:04:54.262059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:4870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.600 [2024-07-15 14:04:54.262075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.600 [2024-07-15 14:04:54.272966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.600 [2024-07-15 14:04:54.272995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:10479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.601 [2024-07-15 14:04:54.273026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.601 [2024-07-15 14:04:54.285713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.601 [2024-07-15 14:04:54.285763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 
lba:16739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.601 [2024-07-15 14:04:54.285780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.601 [2024-07-15 14:04:54.296662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.601 [2024-07-15 14:04:54.296690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.601 [2024-07-15 14:04:54.296721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.601 [2024-07-15 14:04:54.309067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.601 [2024-07-15 14:04:54.309095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:25002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.601 [2024-07-15 14:04:54.309135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.601 [2024-07-15 14:04:54.320777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.601 [2024-07-15 14:04:54.320805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:5682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.601 [2024-07-15 14:04:54.320837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.601 [2024-07-15 14:04:54.332202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.601 [2024-07-15 14:04:54.332230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:1231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.601 [2024-07-15 14:04:54.332261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.601 [2024-07-15 14:04:54.344986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.601 [2024-07-15 14:04:54.345015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:8121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.601 [2024-07-15 14:04:54.345046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.601 [2024-07-15 14:04:54.354932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.601 [2024-07-15 14:04:54.354960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.601 [2024-07-15 14:04:54.354991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.601 [2024-07-15 14:04:54.367445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.601 [2024-07-15 14:04:54.367473] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:18915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.601 [2024-07-15 14:04:54.367507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.601 [2024-07-15 14:04:54.379338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.601 [2024-07-15 14:04:54.379366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:21291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.601 [2024-07-15 14:04:54.379396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.601 [2024-07-15 14:04:54.390298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.601 [2024-07-15 14:04:54.390327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:19883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.601 [2024-07-15 14:04:54.390356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.601 [2024-07-15 14:04:54.402288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.601 [2024-07-15 14:04:54.402318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:1157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.601 [2024-07-15 14:04:54.402348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.601 [2024-07-15 14:04:54.414885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.601 [2024-07-15 14:04:54.414921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.601 [2024-07-15 14:04:54.414954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.601 [2024-07-15 14:04:54.429059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.601 [2024-07-15 14:04:54.429103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:9313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.601 [2024-07-15 14:04:54.429119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.601 [2024-07-15 14:04:54.439013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.601 [2024-07-15 14:04:54.439045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.601 [2024-07-15 14:04:54.439063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.859 [2024-07-15 14:04:54.452210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 
00:25:59.859 [2024-07-15 14:04:54.452241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:6670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.859 [2024-07-15 14:04:54.452273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.859 [2024-07-15 14:04:54.465547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.859 [2024-07-15 14:04:54.465592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:18555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.859 [2024-07-15 14:04:54.465611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.859 [2024-07-15 14:04:54.478712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.859 [2024-07-15 14:04:54.478765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:3402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.859 [2024-07-15 14:04:54.478799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.859 [2024-07-15 14:04:54.489646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.859 [2024-07-15 14:04:54.489675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:2725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.859 [2024-07-15 14:04:54.489716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.859 [2024-07-15 14:04:54.503308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.859 [2024-07-15 14:04:54.503336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:22642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.859 [2024-07-15 14:04:54.503367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.859 [2024-07-15 14:04:54.516612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.859 [2024-07-15 14:04:54.516641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:1772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.859 [2024-07-15 14:04:54.516673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.859 [2024-07-15 14:04:54.527014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.859 [2024-07-15 14:04:54.527044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:21936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.859 [2024-07-15 14:04:54.527075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.859 [2024-07-15 14:04:54.540922] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.859 [2024-07-15 14:04:54.540955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:18983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.859 [2024-07-15 14:04:54.540988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.859 [2024-07-15 14:04:54.553701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.859 [2024-07-15 14:04:54.553749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:16176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.859 [2024-07-15 14:04:54.553768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.859 [2024-07-15 14:04:54.564309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.859 [2024-07-15 14:04:54.564336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.859 [2024-07-15 14:04:54.564368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.859 [2024-07-15 14:04:54.576955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.859 [2024-07-15 14:04:54.576984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:11851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.859 [2024-07-15 14:04:54.577017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.859 [2024-07-15 14:04:54.589563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.860 [2024-07-15 14:04:54.589591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.860 [2024-07-15 14:04:54.589622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.860 [2024-07-15 14:04:54.601121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.860 [2024-07-15 14:04:54.601149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:2491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.860 [2024-07-15 14:04:54.601180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.860 [2024-07-15 14:04:54.614235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.860 [2024-07-15 14:04:54.614279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:5819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.860 [2024-07-15 14:04:54.614310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:25:59.860 [2024-07-15 14:04:54.627274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.860 [2024-07-15 14:04:54.627308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.860 [2024-07-15 14:04:54.627340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.860 [2024-07-15 14:04:54.638970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.860 [2024-07-15 14:04:54.638999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:24755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.860 [2024-07-15 14:04:54.639031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.860 [2024-07-15 14:04:54.653273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.860 [2024-07-15 14:04:54.653300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:3445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.860 [2024-07-15 14:04:54.653332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.860 [2024-07-15 14:04:54.664199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.860 [2024-07-15 14:04:54.664226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:3651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.860 [2024-07-15 14:04:54.664257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.860 [2024-07-15 14:04:54.676301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.860 [2024-07-15 14:04:54.676328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:16315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.860 [2024-07-15 14:04:54.676360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.860 [2024-07-15 14:04:54.686927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f3c280) 00:25:59.860 [2024-07-15 14:04:54.686956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:24793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.860 [2024-07-15 14:04:54.686988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.860 00:25:59.860 Latency(us) 00:25:59.860 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:59.860 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:59.860 nvme0n1 : 2.01 20867.98 81.52 0.00 0.00 6125.34 3155.44 20194.80 00:25:59.860 =================================================================================================================== 00:25:59.860 Total : 20867.98 81.52 
0.00 0.00 6125.34 3155.44 20194.80 00:25:59.860 0 00:26:00.118 14:04:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:00.118 14:04:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:00.118 14:04:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:00.118 14:04:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:00.118 | .driver_specific 00:26:00.118 | .nvme_error 00:26:00.118 | .status_code 00:26:00.118 | .command_transient_transport_error' 00:26:00.118 14:04:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 164 > 0 )) 00:26:00.118 14:04:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3853823 00:26:00.118 14:04:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3853823 ']' 00:26:00.118 14:04:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3853823 00:26:00.118 14:04:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:26:00.118 14:04:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:00.376 14:04:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3853823 00:26:00.376 14:04:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:00.376 14:04:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:00.376 14:04:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3853823' 00:26:00.376 killing process with pid 3853823 00:26:00.376 14:04:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3853823 00:26:00.376 Received shutdown signal, test time was about 2.000000 seconds 00:26:00.376 00:26:00.376 Latency(us) 00:26:00.376 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:00.376 =================================================================================================================== 00:26:00.376 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:00.376 14:04:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3853823 00:26:00.634 14:04:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:26:00.634 14:04:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:00.634 14:04:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:00.634 14:04:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:00.634 14:04:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:00.634 14:04:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3854230 00:26:00.634 14:04:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:26:00.634 14:04:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3854230 /var/tmp/bperf.sock 
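The pass/fail decision traced just above (host/digest.sh@71) comes down to reading the per-bdev NVMe error counters back from the bdevperf RPC socket and requiring that at least one COMMAND TRANSIENT TRANSPORT ERROR was counted. A minimal standalone sketch of that readback follows; the rpc.py location is an assumption (only this run's workspace path appears in the log), while the socket, RPC verb and jq filter are the ones printed in the trace.

#!/usr/bin/env bash
# Sketch only: read the transient transport error counter the way the
# get_transient_errcount trace above does. rpc_py is an assumed location;
# the socket path and jq filter are taken verbatim from this log.
rpc_py=${rpc_py:-./scripts/rpc.py}
sock=/var/tmp/bperf.sock

get_transient_errcount() {
    local bdev=$1
    "$rpc_py" -s "$sock" bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error'
}

errs=$(get_transient_errcount nvme0n1)
echo "transient transport errors observed: $errs"
# The digest error test only passes when injection actually produced failures.
(( errs > 0 ))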
00:26:00.634 14:04:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3854230 ']' 00:26:00.634 14:04:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:00.634 14:04:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:00.634 14:04:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:00.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:00.634 14:04:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:00.634 14:04:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:00.635 [2024-07-15 14:04:55.285369] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:26:00.635 [2024-07-15 14:04:55.285447] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3854230 ] 00:26:00.635 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:00.635 Zero copy mechanism will not be used. 00:26:00.635 EAL: No free 2048 kB hugepages reported on node 1 00:26:00.635 [2024-07-15 14:04:55.342523] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:00.635 [2024-07-15 14:04:55.445401] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:00.892 14:04:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:00.892 14:04:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:26:00.892 14:04:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:00.892 14:04:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:01.149 14:04:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:01.149 14:04:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.149 14:04:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:01.149 14:04:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.149 14:04:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:01.149 14:04:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:01.715 nvme0n1 00:26:01.715 14:04:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:01.715 14:04:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:26:01.715 14:04:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:01.715 14:04:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.715 14:04:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:01.715 14:04:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:01.715 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:01.715 Zero copy mechanism will not be used. 00:26:01.715 Running I/O for 2 seconds... 00:26:01.715 [2024-07-15 14:04:56.424107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:01.715 [2024-07-15 14:04:56.424166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.715 [2024-07-15 14:04:56.424185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.715 [2024-07-15 14:04:56.430472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:01.715 [2024-07-15 14:04:56.430508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.715 [2024-07-15 14:04:56.430539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.715 [2024-07-15 14:04:56.438112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:01.715 [2024-07-15 14:04:56.438140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.715 [2024-07-15 14:04:56.438180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.715 [2024-07-15 14:04:56.444732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:01.715 [2024-07-15 14:04:56.444769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.715 [2024-07-15 14:04:56.444807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.715 [2024-07-15 14:04:56.451589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:01.715 [2024-07-15 14:04:56.451616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.715 [2024-07-15 14:04:56.451647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.715 [2024-07-15 14:04:56.458211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:01.715 [2024-07-15 14:04:56.458238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.715 [2024-07-15 14:04:56.458268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.715 [2024-07-15 14:04:56.464453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:01.715 [2024-07-15 14:04:56.464480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.715 [2024-07-15 14:04:56.464510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.715 [2024-07-15 14:04:56.471697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:01.715 [2024-07-15 14:04:56.471745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.715 [2024-07-15 14:04:56.471763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.715 [2024-07-15 14:04:56.479892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:01.715 [2024-07-15 14:04:56.479932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.715 [2024-07-15 14:04:56.479962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.715 [2024-07-15 14:04:56.487056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:01.715 [2024-07-15 14:04:56.487083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.715 [2024-07-15 14:04:56.487113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.715 [2024-07-15 14:04:56.493808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:01.715 [2024-07-15 14:04:56.493837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.715 [2024-07-15 14:04:56.493872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.715 [2024-07-15 14:04:56.500281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:01.715 [2024-07-15 14:04:56.500307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.715 [2024-07-15 14:04:56.500337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.715 [2024-07-15 14:04:56.507111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:01.715 [2024-07-15 14:04:56.507139] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.715 [2024-07-15 14:04:56.507175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.715 [2024-07-15 14:04:56.514965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:01.715 [2024-07-15 14:04:56.514994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.715 [2024-07-15 14:04:56.515025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.715 [2024-07-15 14:04:56.522389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:01.715 [2024-07-15 14:04:56.522416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.715 [2024-07-15 14:04:56.522446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.715 [2024-07-15 14:04:56.528747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:01.715 [2024-07-15 14:04:56.528775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.716 [2024-07-15 14:04:56.528807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.716 [2024-07-15 14:04:56.535644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:01.716 [2024-07-15 14:04:56.535670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.716 [2024-07-15 14:04:56.535700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.716 [2024-07-15 14:04:56.539927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:01.716 [2024-07-15 14:04:56.539954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.716 [2024-07-15 14:04:56.539985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.716 [2024-07-15 14:04:56.546103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:01.716 [2024-07-15 14:04:56.546131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.716 [2024-07-15 14:04:56.546162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.716 [2024-07-15 14:04:56.554230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:01.716 
[2024-07-15 14:04:56.554264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.716 [2024-07-15 14:04:56.554280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.975 [2024-07-15 14:04:56.561890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:01.975 [2024-07-15 14:04:56.561921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.975 [2024-07-15 14:04:56.561953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.975 [2024-07-15 14:04:56.569323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:01.975 [2024-07-15 14:04:56.569352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.975 [2024-07-15 14:04:56.569393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.975 [2024-07-15 14:04:56.577109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:01.975 [2024-07-15 14:04:56.577137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.975 [2024-07-15 14:04:56.577167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.975 [2024-07-15 14:04:56.583638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:01.975 [2024-07-15 14:04:56.583680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.975 [2024-07-15 14:04:56.583705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.975 [2024-07-15 14:04:56.591208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:01.975 [2024-07-15 14:04:56.591236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.975 [2024-07-15 14:04:56.591265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.975 [2024-07-15 14:04:56.598684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:01.975 [2024-07-15 14:04:56.598711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.975 [2024-07-15 14:04:56.598760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.975 [2024-07-15 14:04:56.606542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x953d60) 00:26:01.975 [2024-07-15 14:04:56.606569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.975 [2024-07-15 14:04:56.606600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.975 [2024-07-15 14:04:56.614424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:01.975 [2024-07-15 14:04:56.614462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.975 [2024-07-15 14:04:56.614492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.975 [2024-07-15 14:04:56.622433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:01.975 [2024-07-15 14:04:56.622471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.975 [2024-07-15 14:04:56.622502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.975 [2024-07-15 14:04:56.630339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:01.975 [2024-07-15 14:04:56.630369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.975 [2024-07-15 14:04:56.630406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.975 [2024-07-15 14:04:56.637923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:01.975 [2024-07-15 14:04:56.637952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.975 [2024-07-15 14:04:56.637983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.975 [2024-07-15 14:04:56.646506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:01.975 [2024-07-15 14:04:56.646533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.975 [2024-07-15 14:04:56.646563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.975 [2024-07-15 14:04:56.653581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:01.975 [2024-07-15 14:04:56.653609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.975 [2024-07-15 14:04:56.653645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.975 [2024-07-15 14:04:56.658173] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:01.975 [2024-07-15 14:04:56.658222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.975 [2024-07-15 14:04:56.658239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.975 [2024-07-15 14:04:56.666216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:01.975 [2024-07-15 14:04:56.666267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.975 [2024-07-15 14:04:56.666282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.975 [2024-07-15 14:04:56.674833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:01.975 [2024-07-15 14:04:56.674862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.975 [2024-07-15 14:04:56.674893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.975 [2024-07-15 14:04:56.683486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:01.975 [2024-07-15 14:04:56.683527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.975 [2024-07-15 14:04:56.683556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.975 [2024-07-15 14:04:56.691443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:01.975 [2024-07-15 14:04:56.691471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.975 [2024-07-15 14:04:56.691507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.975 [2024-07-15 14:04:56.698041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:01.975 [2024-07-15 14:04:56.698077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.975 [2024-07-15 14:04:56.698102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.976 [2024-07-15 14:04:56.704858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:01.976 [2024-07-15 14:04:56.704887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.976 [2024-07-15 14:04:56.704919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:26:01.976 [2024-07-15 14:04:56.711784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:01.976 [2024-07-15 14:04:56.711813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.976 [2024-07-15 14:04:56.711829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.976 [2024-07-15 14:04:56.718922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:01.976 [2024-07-15 14:04:56.718951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.976 [2024-07-15 14:04:56.718967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.976 [2024-07-15 14:04:56.725868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:01.976 [2024-07-15 14:04:56.725897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.976 [2024-07-15 14:04:56.725914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.976 [2024-07-15 14:04:56.732482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:01.976 [2024-07-15 14:04:56.732510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.976 [2024-07-15 14:04:56.732540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.976 [2024-07-15 14:04:56.739725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:01.976 [2024-07-15 14:04:56.739782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.976 [2024-07-15 14:04:56.739800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.976 [2024-07-15 14:04:56.747474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:01.976 [2024-07-15 14:04:56.747503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.976 [2024-07-15 14:04:56.747534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.976 [2024-07-15 14:04:56.754650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:01.976 [2024-07-15 14:04:56.754678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.976 [2024-07-15 14:04:56.754709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.976 [2024-07-15 14:04:56.761424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:01.976 [2024-07-15 14:04:56.761452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.976 [2024-07-15 14:04:56.761481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.976 [2024-07-15 14:04:56.768278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:01.976 [2024-07-15 14:04:56.768305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.976 [2024-07-15 14:04:56.768336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.976 [2024-07-15 14:04:56.775164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:01.976 [2024-07-15 14:04:56.775193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.976 [2024-07-15 14:04:56.775223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.976 [2024-07-15 14:04:56.781942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:01.976 [2024-07-15 14:04:56.781970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.976 [2024-07-15 14:04:56.782001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.976 [2024-07-15 14:04:56.788810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:01.976 [2024-07-15 14:04:56.788841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.976 [2024-07-15 14:04:56.788873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.976 [2024-07-15 14:04:56.795865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:01.976 [2024-07-15 14:04:56.795894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.976 [2024-07-15 14:04:56.795927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.976 [2024-07-15 14:04:56.802542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:01.976 [2024-07-15 14:04:56.802569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.976 [2024-07-15 14:04:56.802599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.976 [2024-07-15 14:04:56.809190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:01.976 [2024-07-15 14:04:56.809217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.976 [2024-07-15 14:04:56.809247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.235 [2024-07-15 14:04:56.816358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.235 [2024-07-15 14:04:56.816388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.235 [2024-07-15 14:04:56.816433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.235 [2024-07-15 14:04:56.823758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.235 [2024-07-15 14:04:56.823795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.235 [2024-07-15 14:04:56.823826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.235 [2024-07-15 14:04:56.830729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.235 [2024-07-15 14:04:56.830784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.235 [2024-07-15 14:04:56.830800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.235 [2024-07-15 14:04:56.837573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.235 [2024-07-15 14:04:56.837600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.235 [2024-07-15 14:04:56.837638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.235 [2024-07-15 14:04:56.844321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.235 [2024-07-15 14:04:56.844348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.235 [2024-07-15 14:04:56.844378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.235 [2024-07-15 14:04:56.851346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.235 [2024-07-15 14:04:56.851374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.235 [2024-07-15 14:04:56.851405] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.235 [2024-07-15 14:04:56.860198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.235 [2024-07-15 14:04:56.860226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.235 [2024-07-15 14:04:56.860256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.235 [2024-07-15 14:04:56.867504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.235 [2024-07-15 14:04:56.867532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.235 [2024-07-15 14:04:56.867563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.235 [2024-07-15 14:04:56.875470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.235 [2024-07-15 14:04:56.875497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.235 [2024-07-15 14:04:56.875528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.235 [2024-07-15 14:04:56.883441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.235 [2024-07-15 14:04:56.883479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.235 [2024-07-15 14:04:56.883511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.235 [2024-07-15 14:04:56.892222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.235 [2024-07-15 14:04:56.892265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.235 [2024-07-15 14:04:56.892285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.235 [2024-07-15 14:04:56.900291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.235 [2024-07-15 14:04:56.900319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.235 [2024-07-15 14:04:56.900349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.235 [2024-07-15 14:04:56.907106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.236 [2024-07-15 14:04:56.907133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.236 
[2024-07-15 14:04:56.907163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.236 [2024-07-15 14:04:56.914698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.236 [2024-07-15 14:04:56.914747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.236 [2024-07-15 14:04:56.914765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.236 [2024-07-15 14:04:56.918586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.236 [2024-07-15 14:04:56.918621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.236 [2024-07-15 14:04:56.918651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.236 [2024-07-15 14:04:56.925890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.236 [2024-07-15 14:04:56.925925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.236 [2024-07-15 14:04:56.925955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.236 [2024-07-15 14:04:56.932905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.236 [2024-07-15 14:04:56.932933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.236 [2024-07-15 14:04:56.932964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.236 [2024-07-15 14:04:56.940897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.236 [2024-07-15 14:04:56.940933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.236 [2024-07-15 14:04:56.940964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.236 [2024-07-15 14:04:56.948362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.236 [2024-07-15 14:04:56.948390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.236 [2024-07-15 14:04:56.948420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.236 [2024-07-15 14:04:56.957610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.236 [2024-07-15 14:04:56.957638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:416 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:02.236 [2024-07-15 14:04:56.957675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.236 [2024-07-15 14:04:56.965967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.236 [2024-07-15 14:04:56.965996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.236 [2024-07-15 14:04:56.966027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.236 [2024-07-15 14:04:56.973230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.236 [2024-07-15 14:04:56.973258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.236 [2024-07-15 14:04:56.973294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.236 [2024-07-15 14:04:56.981249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.236 [2024-07-15 14:04:56.981277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.236 [2024-07-15 14:04:56.981307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.236 [2024-07-15 14:04:56.989326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.236 [2024-07-15 14:04:56.989368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.236 [2024-07-15 14:04:56.989385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.236 [2024-07-15 14:04:56.997833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.236 [2024-07-15 14:04:56.997861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.236 [2024-07-15 14:04:56.997892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.236 [2024-07-15 14:04:57.006458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.236 [2024-07-15 14:04:57.006486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.236 [2024-07-15 14:04:57.006517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.236 [2024-07-15 14:04:57.013729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.236 [2024-07-15 14:04:57.013765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 
nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.236 [2024-07-15 14:04:57.013804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.236 [2024-07-15 14:04:57.019921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.236 [2024-07-15 14:04:57.019950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.236 [2024-07-15 14:04:57.019980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.236 [2024-07-15 14:04:57.025296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.236 [2024-07-15 14:04:57.025322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.236 [2024-07-15 14:04:57.025353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.236 [2024-07-15 14:04:57.031259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.236 [2024-07-15 14:04:57.031285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.236 [2024-07-15 14:04:57.031314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.236 [2024-07-15 14:04:57.037170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.236 [2024-07-15 14:04:57.037197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.236 [2024-07-15 14:04:57.037211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.236 [2024-07-15 14:04:57.043179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.236 [2024-07-15 14:04:57.043206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.236 [2024-07-15 14:04:57.043236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.236 [2024-07-15 14:04:57.049597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.236 [2024-07-15 14:04:57.049624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.236 [2024-07-15 14:04:57.049654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.236 [2024-07-15 14:04:57.055993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.236 [2024-07-15 14:04:57.056021] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.236 [2024-07-15 14:04:57.056055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.236 [2024-07-15 14:04:57.062389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.236 [2024-07-15 14:04:57.062415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.236 [2024-07-15 14:04:57.062445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.236 [2024-07-15 14:04:57.069398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.236 [2024-07-15 14:04:57.069428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.236 [2024-07-15 14:04:57.069458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.495 [2024-07-15 14:04:57.077327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.495 [2024-07-15 14:04:57.077369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.495 [2024-07-15 14:04:57.077386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.495 [2024-07-15 14:04:57.084834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.495 [2024-07-15 14:04:57.084863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.495 [2024-07-15 14:04:57.084894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.495 [2024-07-15 14:04:57.092576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.495 [2024-07-15 14:04:57.092603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.495 [2024-07-15 14:04:57.092633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.495 [2024-07-15 14:04:57.100713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.495 [2024-07-15 14:04:57.100763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.495 [2024-07-15 14:04:57.100779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.495 [2024-07-15 14:04:57.109137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.495 
[2024-07-15 14:04:57.109162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.495 [2024-07-15 14:04:57.109192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.495 [2024-07-15 14:04:57.117899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.495 [2024-07-15 14:04:57.117926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.495 [2024-07-15 14:04:57.117956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.495 [2024-07-15 14:04:57.126813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.495 [2024-07-15 14:04:57.126839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.495 [2024-07-15 14:04:57.126870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.495 [2024-07-15 14:04:57.135849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.495 [2024-07-15 14:04:57.135875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.495 [2024-07-15 14:04:57.135905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.495 [2024-07-15 14:04:57.145327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.495 [2024-07-15 14:04:57.145353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.495 [2024-07-15 14:04:57.145383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.495 [2024-07-15 14:04:57.155613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.495 [2024-07-15 14:04:57.155641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.495 [2024-07-15 14:04:57.155673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.495 [2024-07-15 14:04:57.165516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.495 [2024-07-15 14:04:57.165541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.495 [2024-07-15 14:04:57.165571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.495 [2024-07-15 14:04:57.175435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x953d60) 00:26:02.495 [2024-07-15 14:04:57.175461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.495 [2024-07-15 14:04:57.175491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.495 [2024-07-15 14:04:57.185597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.495 [2024-07-15 14:04:57.185624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.495 [2024-07-15 14:04:57.185654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.495 [2024-07-15 14:04:57.195977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.495 [2024-07-15 14:04:57.196005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.495 [2024-07-15 14:04:57.196036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.495 [2024-07-15 14:04:57.205813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.495 [2024-07-15 14:04:57.205840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.495 [2024-07-15 14:04:57.205870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.495 [2024-07-15 14:04:57.215781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.495 [2024-07-15 14:04:57.215807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.495 [2024-07-15 14:04:57.215836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.495 [2024-07-15 14:04:57.225858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.495 [2024-07-15 14:04:57.225884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.495 [2024-07-15 14:04:57.225920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.495 [2024-07-15 14:04:57.235983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.495 [2024-07-15 14:04:57.236009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.495 [2024-07-15 14:04:57.236039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.495 [2024-07-15 14:04:57.246271] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.495 [2024-07-15 14:04:57.246298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.495 [2024-07-15 14:04:57.246328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.495 [2024-07-15 14:04:57.256144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.495 [2024-07-15 14:04:57.256170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.495 [2024-07-15 14:04:57.256199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.495 [2024-07-15 14:04:57.266189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.495 [2024-07-15 14:04:57.266216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.495 [2024-07-15 14:04:57.266245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.495 [2024-07-15 14:04:57.276197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.495 [2024-07-15 14:04:57.276223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.495 [2024-07-15 14:04:57.276253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.495 [2024-07-15 14:04:57.286074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.495 [2024-07-15 14:04:57.286101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.495 [2024-07-15 14:04:57.286131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.495 [2024-07-15 14:04:57.295972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.495 [2024-07-15 14:04:57.295999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.495 [2024-07-15 14:04:57.296030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.495 [2024-07-15 14:04:57.303282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.495 [2024-07-15 14:04:57.303309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.495 [2024-07-15 14:04:57.303340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:26:02.495 [2024-07-15 14:04:57.309848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.495 [2024-07-15 14:04:57.309880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.495 [2024-07-15 14:04:57.309912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.495 [2024-07-15 14:04:57.316105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.495 [2024-07-15 14:04:57.316131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.495 [2024-07-15 14:04:57.316160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.495 [2024-07-15 14:04:57.322103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.495 [2024-07-15 14:04:57.322129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.495 [2024-07-15 14:04:57.322159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.495 [2024-07-15 14:04:57.327935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.495 [2024-07-15 14:04:57.327961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.495 [2024-07-15 14:04:57.327991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.495 [2024-07-15 14:04:57.334142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.495 [2024-07-15 14:04:57.334172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.495 [2024-07-15 14:04:57.334203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.754 [2024-07-15 14:04:57.340402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.754 [2024-07-15 14:04:57.340430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.754 [2024-07-15 14:04:57.340471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.754 [2024-07-15 14:04:57.347109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.754 [2024-07-15 14:04:57.347136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.754 [2024-07-15 14:04:57.347167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.754 [2024-07-15 14:04:57.354504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.754 [2024-07-15 14:04:57.354531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.754 [2024-07-15 14:04:57.354562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.754 [2024-07-15 14:04:57.361013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.754 [2024-07-15 14:04:57.361055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.755 [2024-07-15 14:04:57.361069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.755 [2024-07-15 14:04:57.367644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.755 [2024-07-15 14:04:57.367670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.755 [2024-07-15 14:04:57.367700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.755 [2024-07-15 14:04:57.374902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.755 [2024-07-15 14:04:57.374929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.755 [2024-07-15 14:04:57.374960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.755 [2024-07-15 14:04:57.382082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.755 [2024-07-15 14:04:57.382109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.755 [2024-07-15 14:04:57.382138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.755 [2024-07-15 14:04:57.389528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.755 [2024-07-15 14:04:57.389556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.755 [2024-07-15 14:04:57.389586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.755 [2024-07-15 14:04:57.396664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.755 [2024-07-15 14:04:57.396691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.755 [2024-07-15 14:04:57.396721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.755 [2024-07-15 14:04:57.404201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.755 [2024-07-15 14:04:57.404228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.755 [2024-07-15 14:04:57.404258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.755 [2024-07-15 14:04:57.412001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.755 [2024-07-15 14:04:57.412029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.755 [2024-07-15 14:04:57.412044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.755 [2024-07-15 14:04:57.420656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.755 [2024-07-15 14:04:57.420686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.755 [2024-07-15 14:04:57.420716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.755 [2024-07-15 14:04:57.428610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.755 [2024-07-15 14:04:57.428638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.755 [2024-07-15 14:04:57.428674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.755 [2024-07-15 14:04:57.436547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.755 [2024-07-15 14:04:57.436574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.755 [2024-07-15 14:04:57.436605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.755 [2024-07-15 14:04:57.444864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.755 [2024-07-15 14:04:57.444893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.755 [2024-07-15 14:04:57.444909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.755 [2024-07-15 14:04:57.452314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.755 [2024-07-15 14:04:57.452340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.755 [2024-07-15 14:04:57.452370] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.755 [2024-07-15 14:04:57.459080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.755 [2024-07-15 14:04:57.459120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.755 [2024-07-15 14:04:57.459135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.755 [2024-07-15 14:04:57.466361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.755 [2024-07-15 14:04:57.466387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.755 [2024-07-15 14:04:57.466417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.755 [2024-07-15 14:04:57.473587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.755 [2024-07-15 14:04:57.473614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.755 [2024-07-15 14:04:57.473643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.755 [2024-07-15 14:04:57.478167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.755 [2024-07-15 14:04:57.478194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.755 [2024-07-15 14:04:57.478224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.755 [2024-07-15 14:04:57.486406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.755 [2024-07-15 14:04:57.486434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.755 [2024-07-15 14:04:57.486464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.755 [2024-07-15 14:04:57.494011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.755 [2024-07-15 14:04:57.494053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.755 [2024-07-15 14:04:57.494069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.755 [2024-07-15 14:04:57.502840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.755 [2024-07-15 14:04:57.502868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.755 
[2024-07-15 14:04:57.502898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.755 [2024-07-15 14:04:57.511892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.755 [2024-07-15 14:04:57.511920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.755 [2024-07-15 14:04:57.511950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.755 [2024-07-15 14:04:57.521024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.755 [2024-07-15 14:04:57.521066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.755 [2024-07-15 14:04:57.521081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.755 [2024-07-15 14:04:57.529320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.755 [2024-07-15 14:04:57.529347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.755 [2024-07-15 14:04:57.529377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.755 [2024-07-15 14:04:57.537896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.755 [2024-07-15 14:04:57.537923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.755 [2024-07-15 14:04:57.537954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.755 [2024-07-15 14:04:57.546126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.755 [2024-07-15 14:04:57.546152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.755 [2024-07-15 14:04:57.546182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.755 [2024-07-15 14:04:57.554867] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.755 [2024-07-15 14:04:57.554896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.755 [2024-07-15 14:04:57.554926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.755 [2024-07-15 14:04:57.563246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.755 [2024-07-15 14:04:57.563274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8480 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:02.755 [2024-07-15 14:04:57.563312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.755 [2024-07-15 14:04:57.570904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.755 [2024-07-15 14:04:57.570932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.755 [2024-07-15 14:04:57.570963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.755 [2024-07-15 14:04:57.579212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.755 [2024-07-15 14:04:57.579239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.755 [2024-07-15 14:04:57.579270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.756 [2024-07-15 14:04:57.587821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:02.756 [2024-07-15 14:04:57.587848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.756 [2024-07-15 14:04:57.587878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.049 [2024-07-15 14:04:57.596843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.049 [2024-07-15 14:04:57.596873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.049 [2024-07-15 14:04:57.596904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.049 [2024-07-15 14:04:57.605513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.049 [2024-07-15 14:04:57.605541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.049 [2024-07-15 14:04:57.605571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.049 [2024-07-15 14:04:57.614666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.049 [2024-07-15 14:04:57.614694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.050 [2024-07-15 14:04:57.614724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:03.050 [2024-07-15 14:04:57.624393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.050 [2024-07-15 14:04:57.624420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:6 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.050 [2024-07-15 14:04:57.624449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.050 [2024-07-15 14:04:57.634467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.050 [2024-07-15 14:04:57.634494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.050 [2024-07-15 14:04:57.634524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.050 [2024-07-15 14:04:57.644339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.050 [2024-07-15 14:04:57.644372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.050 [2024-07-15 14:04:57.644405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.050 [2024-07-15 14:04:57.653885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.050 [2024-07-15 14:04:57.653913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.050 [2024-07-15 14:04:57.653944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:03.050 [2024-07-15 14:04:57.663659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.050 [2024-07-15 14:04:57.663685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.050 [2024-07-15 14:04:57.663714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.050 [2024-07-15 14:04:57.673907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.050 [2024-07-15 14:04:57.673935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.050 [2024-07-15 14:04:57.673966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.050 [2024-07-15 14:04:57.684291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.050 [2024-07-15 14:04:57.684319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.050 [2024-07-15 14:04:57.684350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.050 [2024-07-15 14:04:57.694628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.050 [2024-07-15 14:04:57.694655] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.050 [2024-07-15 14:04:57.694686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:03.050 [2024-07-15 14:04:57.704358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.050 [2024-07-15 14:04:57.704386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.050 [2024-07-15 14:04:57.704416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.050 [2024-07-15 14:04:57.714521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.050 [2024-07-15 14:04:57.714548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.050 [2024-07-15 14:04:57.714578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.050 [2024-07-15 14:04:57.724986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.050 [2024-07-15 14:04:57.725014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.050 [2024-07-15 14:04:57.725029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.050 [2024-07-15 14:04:57.735319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.050 [2024-07-15 14:04:57.735346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.050 [2024-07-15 14:04:57.735376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:03.050 [2024-07-15 14:04:57.745376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.050 [2024-07-15 14:04:57.745402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.050 [2024-07-15 14:04:57.745432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.050 [2024-07-15 14:04:57.755823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.050 [2024-07-15 14:04:57.755852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.050 [2024-07-15 14:04:57.755882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.050 [2024-07-15 14:04:57.765925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.050 
[2024-07-15 14:04:57.765953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.050 [2024-07-15 14:04:57.765983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.050 [2024-07-15 14:04:57.776430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.050 [2024-07-15 14:04:57.776457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.050 [2024-07-15 14:04:57.776487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:03.050 [2024-07-15 14:04:57.786795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.050 [2024-07-15 14:04:57.786822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.050 [2024-07-15 14:04:57.786852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.050 [2024-07-15 14:04:57.796668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.050 [2024-07-15 14:04:57.796695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.050 [2024-07-15 14:04:57.796726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.050 [2024-07-15 14:04:57.803911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.050 [2024-07-15 14:04:57.803940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.050 [2024-07-15 14:04:57.803970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.050 [2024-07-15 14:04:57.811950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.050 [2024-07-15 14:04:57.811979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.050 [2024-07-15 14:04:57.812016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:03.050 [2024-07-15 14:04:57.820095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.050 [2024-07-15 14:04:57.820123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.050 [2024-07-15 14:04:57.820155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.050 [2024-07-15 14:04:57.827386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x953d60) 00:26:03.050 [2024-07-15 14:04:57.827414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.050 [2024-07-15 14:04:57.827445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.050 [2024-07-15 14:04:57.835770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.050 [2024-07-15 14:04:57.835800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.050 [2024-07-15 14:04:57.835832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.050 [2024-07-15 14:04:57.843247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.050 [2024-07-15 14:04:57.843274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.050 [2024-07-15 14:04:57.843305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:03.050 [2024-07-15 14:04:57.851110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.050 [2024-07-15 14:04:57.851142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.050 [2024-07-15 14:04:57.851159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.050 [2024-07-15 14:04:57.858784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.050 [2024-07-15 14:04:57.858819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.050 [2024-07-15 14:04:57.858846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.323 [2024-07-15 14:04:57.865546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.323 [2024-07-15 14:04:57.865580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.323 [2024-07-15 14:04:57.865620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.323 [2024-07-15 14:04:57.870299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.323 [2024-07-15 14:04:57.870328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.323 [2024-07-15 14:04:57.870359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:03.323 [2024-07-15 14:04:57.877763] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.323 [2024-07-15 14:04:57.877810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.323 [2024-07-15 14:04:57.877843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.323 [2024-07-15 14:04:57.885633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.323 [2024-07-15 14:04:57.885662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.323 [2024-07-15 14:04:57.885692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.323 [2024-07-15 14:04:57.893281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.323 [2024-07-15 14:04:57.893312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.323 [2024-07-15 14:04:57.893344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.323 [2024-07-15 14:04:57.901372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.323 [2024-07-15 14:04:57.901402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.323 [2024-07-15 14:04:57.901433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:03.323 [2024-07-15 14:04:57.909074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.323 [2024-07-15 14:04:57.909118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.323 [2024-07-15 14:04:57.909141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.323 [2024-07-15 14:04:57.916909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.323 [2024-07-15 14:04:57.916941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.323 [2024-07-15 14:04:57.916959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.323 [2024-07-15 14:04:57.924222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.323 [2024-07-15 14:04:57.924250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.323 [2024-07-15 14:04:57.924281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:26:03.323 [2024-07-15 14:04:57.931542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.323 [2024-07-15 14:04:57.931570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.323 [2024-07-15 14:04:57.931601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:03.323 [2024-07-15 14:04:57.938266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.323 [2024-07-15 14:04:57.938293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.323 [2024-07-15 14:04:57.938323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.323 [2024-07-15 14:04:57.945605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.323 [2024-07-15 14:04:57.945634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.323 [2024-07-15 14:04:57.945665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.323 [2024-07-15 14:04:57.949609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.323 [2024-07-15 14:04:57.949637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.323 [2024-07-15 14:04:57.949667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.323 [2024-07-15 14:04:57.956433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.323 [2024-07-15 14:04:57.956460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.323 [2024-07-15 14:04:57.956492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:03.323 [2024-07-15 14:04:57.963427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.323 [2024-07-15 14:04:57.963454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.323 [2024-07-15 14:04:57.963484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.323 [2024-07-15 14:04:57.970439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.323 [2024-07-15 14:04:57.970465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.323 [2024-07-15 14:04:57.970496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.323 [2024-07-15 14:04:57.977979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.323 [2024-07-15 14:04:57.978014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.323 [2024-07-15 14:04:57.978046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.323 [2024-07-15 14:04:57.985529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.324 [2024-07-15 14:04:57.985556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.324 [2024-07-15 14:04:57.985586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:03.324 [2024-07-15 14:04:57.992789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.324 [2024-07-15 14:04:57.992815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.324 [2024-07-15 14:04:57.992844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.324 [2024-07-15 14:04:58.000361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.324 [2024-07-15 14:04:58.000387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.324 [2024-07-15 14:04:58.000422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.324 [2024-07-15 14:04:58.008339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.324 [2024-07-15 14:04:58.008365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.324 [2024-07-15 14:04:58.008395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.324 [2024-07-15 14:04:58.015611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.324 [2024-07-15 14:04:58.015638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.324 [2024-07-15 14:04:58.015667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:03.324 [2024-07-15 14:04:58.021665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.324 [2024-07-15 14:04:58.021691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.324 [2024-07-15 14:04:58.021722] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.324 [2024-07-15 14:04:58.028181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.324 [2024-07-15 14:04:58.028210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.324 [2024-07-15 14:04:58.028241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.324 [2024-07-15 14:04:58.036861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.324 [2024-07-15 14:04:58.036892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.324 [2024-07-15 14:04:58.036924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.324 [2024-07-15 14:04:58.044253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.324 [2024-07-15 14:04:58.044281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.324 [2024-07-15 14:04:58.044312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:03.324 [2024-07-15 14:04:58.051091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.324 [2024-07-15 14:04:58.051119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.324 [2024-07-15 14:04:58.051135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.324 [2024-07-15 14:04:58.057351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.324 [2024-07-15 14:04:58.057378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.324 [2024-07-15 14:04:58.057409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.324 [2024-07-15 14:04:58.063960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.324 [2024-07-15 14:04:58.063988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.324 [2024-07-15 14:04:58.064025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.324 [2024-07-15 14:04:58.071652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.324 [2024-07-15 14:04:58.071694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.324 [2024-07-15 14:04:58.071709] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:03.324 [2024-07-15 14:04:58.079696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.324 [2024-07-15 14:04:58.079745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.324 [2024-07-15 14:04:58.079764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.324 [2024-07-15 14:04:58.087967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.324 [2024-07-15 14:04:58.087995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.324 [2024-07-15 14:04:58.088026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.324 [2024-07-15 14:04:58.096401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.324 [2024-07-15 14:04:58.096428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.324 [2024-07-15 14:04:58.096459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.324 [2024-07-15 14:04:58.105345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.324 [2024-07-15 14:04:58.105372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.324 [2024-07-15 14:04:58.105403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:03.324 [2024-07-15 14:04:58.114506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.324 [2024-07-15 14:04:58.114534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.324 [2024-07-15 14:04:58.114564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.324 [2024-07-15 14:04:58.124421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.324 [2024-07-15 14:04:58.124448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.324 [2024-07-15 14:04:58.124478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.324 [2024-07-15 14:04:58.134068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.324 [2024-07-15 14:04:58.134097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:03.324 [2024-07-15 14:04:58.134132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.324 [2024-07-15 14:04:58.144021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.324 [2024-07-15 14:04:58.144064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.324 [2024-07-15 14:04:58.144080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:03.324 [2024-07-15 14:04:58.151870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.324 [2024-07-15 14:04:58.151915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.324 [2024-07-15 14:04:58.151932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.589 [2024-07-15 14:04:58.159853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.589 [2024-07-15 14:04:58.159883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.589 [2024-07-15 14:04:58.159914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.589 [2024-07-15 14:04:58.169073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.589 [2024-07-15 14:04:58.169110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.589 [2024-07-15 14:04:58.169141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.589 [2024-07-15 14:04:58.178659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.589 [2024-07-15 14:04:58.178688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.589 [2024-07-15 14:04:58.178719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:03.589 [2024-07-15 14:04:58.188095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.590 [2024-07-15 14:04:58.188122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.590 [2024-07-15 14:04:58.188162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.590 [2024-07-15 14:04:58.196828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.590 [2024-07-15 14:04:58.196857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23168 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.590 [2024-07-15 14:04:58.196895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.590 [2024-07-15 14:04:58.207131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.590 [2024-07-15 14:04:58.207160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.590 [2024-07-15 14:04:58.207190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.590 [2024-07-15 14:04:58.216688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.590 [2024-07-15 14:04:58.216722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.590 [2024-07-15 14:04:58.216760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:03.590 [2024-07-15 14:04:58.226203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.590 [2024-07-15 14:04:58.226231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.590 [2024-07-15 14:04:58.226261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.590 [2024-07-15 14:04:58.235815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.590 [2024-07-15 14:04:58.235844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.590 [2024-07-15 14:04:58.235876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.590 [2024-07-15 14:04:58.245637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.590 [2024-07-15 14:04:58.245665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.590 [2024-07-15 14:04:58.245696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.590 [2024-07-15 14:04:58.255342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.590 [2024-07-15 14:04:58.255384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.590 [2024-07-15 14:04:58.255401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:03.590 [2024-07-15 14:04:58.265138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.590 [2024-07-15 14:04:58.265166] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.590 [2024-07-15 14:04:58.265197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.590 [2024-07-15 14:04:58.274047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.590 [2024-07-15 14:04:58.274076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.590 [2024-07-15 14:04:58.274091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.590 [2024-07-15 14:04:58.284171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.590 [2024-07-15 14:04:58.284200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.590 [2024-07-15 14:04:58.284238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.590 [2024-07-15 14:04:58.294274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.590 [2024-07-15 14:04:58.294303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.590 [2024-07-15 14:04:58.294334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:03.590 [2024-07-15 14:04:58.304430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.590 [2024-07-15 14:04:58.304458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.590 [2024-07-15 14:04:58.304489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.590 [2024-07-15 14:04:58.313618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.590 [2024-07-15 14:04:58.313645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.590 [2024-07-15 14:04:58.313676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.590 [2024-07-15 14:04:58.322526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.590 [2024-07-15 14:04:58.322554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.590 [2024-07-15 14:04:58.322584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.590 [2024-07-15 14:04:58.331245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.590 [2024-07-15 14:04:58.331288] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.590 [2024-07-15 14:04:58.331303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:03.590 [2024-07-15 14:04:58.340819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.590 [2024-07-15 14:04:58.340847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.590 [2024-07-15 14:04:58.340877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.590 [2024-07-15 14:04:58.350583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.590 [2024-07-15 14:04:58.350612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.590 [2024-07-15 14:04:58.350642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.590 [2024-07-15 14:04:58.359958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.590 [2024-07-15 14:04:58.359987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.590 [2024-07-15 14:04:58.360020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:03.590 [2024-07-15 14:04:58.369499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.590 [2024-07-15 14:04:58.369529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.590 [2024-07-15 14:04:58.369560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:03.591 [2024-07-15 14:04:58.379678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.591 [2024-07-15 14:04:58.379706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.591 [2024-07-15 14:04:58.379751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:03.591 [2024-07-15 14:04:58.388958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 00:26:03.591 [2024-07-15 14:04:58.388986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:03.591 [2024-07-15 14:04:58.389017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:03.591 [2024-07-15 14:04:58.398387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60) 
00:26:03.591 [2024-07-15 14:04:58.398415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:03.591 [2024-07-15 14:04:58.398445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:03.591 [2024-07-15 14:04:58.407921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60)
00:26:03.591 [2024-07-15 14:04:58.407949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:03.591 [2024-07-15 14:04:58.407980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:03.591 [2024-07-15 14:04:58.417465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x953d60)
00:26:03.591 [2024-07-15 14:04:58.417493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:03.591 [2024-07-15 14:04:58.417522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:03.591
00:26:03.591 Latency(us)
00:26:03.591 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:03.591 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:26:03.591 nvme0n1 : 2.00 3872.90 484.11 0.00 0.00 4126.25 676.60 10874.12
00:26:03.591 ===================================================================================================================
00:26:03.591 Total : 3872.90 484.11 0.00 0.00 4126.25 676.60 10874.12
00:26:03.591 0
00:26:03.851 14:04:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:03.851 14:04:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:03.851 14:04:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:03.851 14:04:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:03.851 | .driver_specific
00:26:03.851 | .nvme_error
00:26:03.851 | .status_code
00:26:03.851 | .command_transient_transport_error'
00:26:04.109 14:04:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 250 > 0 ))
00:26:04.109 14:04:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3854230
00:26:04.109 14:04:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3854230 ']'
00:26:04.109 14:04:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3854230
00:26:04.109 14:04:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:26:04.109 14:04:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:04.109 14:04:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3854230
00:26:04.109 14:04:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
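The (( 250 > 0 )) check above is the pass criterion for the randread leg: get_transient_errcount asks the bdevperf instance behind /var/tmp/bperf.sock for its iostat and pulls the NVMe "command transient transport error" counter out with jq, so the leg only passes if the injected digest corruption really surfaced as transient transport errors (250 of them here). A minimal stand-alone sketch of that query, assuming an SPDK checkout in ./spdk and the same RPC socket as in the trace:

#!/usr/bin/env bash
# Sketch only: count NVMe "command transient transport error" completions
# reported by bdevperf for one bdev and fail if none were seen.
# Assumes ./spdk is an SPDK checkout and bdevperf listens on /var/tmp/bperf.sock.
set -euo pipefail

bdev=${1:-nvme0n1}
rpc_sock=/var/tmp/bperf.sock

# bdev_get_iostat includes driver-specific NVMe error statistics when the
# controller was set up with --nvme-error-stat (as done for these passes).
errcount=$(./spdk/scripts/rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
    | jq -r '.bdevs[0]
             | .driver_specific
             | .nvme_error
             | .status_code
             | .command_transient_transport_error')

echo "transient transport errors on $bdev: $errcount"
(( errcount > 0 ))   # non-zero count is what the digest test asserts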
00:26:04.109 14:04:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:26:04.109 14:04:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3854230'
00:26:04.110 killing process with pid 3854230
00:26:04.110 14:04:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3854230
00:26:04.110 Received shutdown signal, test time was about 2.000000 seconds
00:26:04.110
00:26:04.110 Latency(us)
00:26:04.110 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:04.110 ===================================================================================================================
00:26:04.110 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:04.110 14:04:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3854230
00:26:04.367 14:04:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:26:04.367 14:04:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:26:04.367 14:04:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:26:04.367 14:04:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:26:04.367 14:04:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:26:04.367 14:04:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3854663
00:26:04.367 14:04:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:26:04.367 14:04:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3854663 /var/tmp/bperf.sock
00:26:04.367 14:04:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3854663 ']'
00:26:04.367 14:04:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:04.367 14:04:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:26:04.367 14:04:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:26:04.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:04.367 14:04:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:26:04.367 14:04:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
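Each error pass then brings up a fresh bdevperf instance for itself: digest.sh@57 starts the app with -z so it comes up idle and only listens on the /var/tmp/bperf.sock RPC socket, the script records its pid (3854663 here), and waitforlisten blocks until that socket is answering. A rough sketch of that launch pattern, with the real waitforlisten helper from autotest_common.sh replaced by a plain polling loop and an assumed ./spdk checkout path:

#!/usr/bin/env bash
# Sketch of the per-pass bdevperf launch traced above (digest.sh@57-60),
# not the verbatim helper code.
set -euo pipefail

SPDK=./spdk   # assumed location of the SPDK tree

# -z keeps bdevperf idle until it is configured and told to run over RPC.
"$SPDK"/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
    -w randwrite -o 4096 -t 2 -q 128 -z &
bperfpid=$!

# The test uses waitforlisten(); polling the RPC socket is the same idea.
for _ in $(seq 1 100); do
    if "$SPDK"/scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods &>/dev/null; then
        break
    fi
    sleep 0.1
done
echo "bdevperf (pid $bperfpid) is listening on /var/tmp/bperf.sock"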
00:26:04.367 [2024-07-15 14:04:59.036383] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization...
00:26:04.367 [2024-07-15 14:04:59.036475] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3854663 ]
00:26:04.367 EAL: No free 2048 kB hugepages reported on node 1
00:26:04.367 [2024-07-15 14:04:59.096468] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:04.367 [2024-07-15 14:04:59.200978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:26:04.625 14:04:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:26:04.625 14:04:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:26:04.625 14:04:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:04.625 14:04:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:04.883 14:04:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:04.883 14:04:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:04.883 14:04:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:04.883 14:04:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:04.883 14:04:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:04.883 14:04:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:05.448 nvme0n1
00:26:05.448 14:05:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:26:05.448 14:05:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:05.448 14:05:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:05.448 14:05:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:05.448 14:05:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:26:05.448 14:05:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:05.448 Running I/O for 2 seconds...
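That trace is the whole setup for the write-direction digest-error leg: per-command NVMe error statistics and unlimited bdev retries are enabled, CRC32C error injection is switched off while the controller is attached with data digest (--ddgst) turned on, injection is then re-enabled in corrupt mode at an interval of 256, and perform_tests starts the 2-second randwrite job whose digest failures fill the log that follows. A condensed sketch of the same RPC sequence, with the socket, target address, and subsystem NQN taken from the trace and the ./spdk path assumed (an illustration, not the verbatim digest.sh code):

#!/usr/bin/env bash
# Sketch of the digest-error pass driven above. Assumes bdevperf already
# listens on /var/tmp/bperf.sock and the target nqn.2016-06.io.spdk:cnode1
# is reachable at 10.0.0.2:4420.
set -euo pipefail

rpc="./spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

# Keep NVMe error statistics and retry indefinitely inside the bdev layer,
# so transient transport errors are counted instead of failing the job.
$rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach with data digest enabled while CRC32C corruption is disabled
# (presumably so the attach itself is not disturbed by bad digests).
$rpc accel_error_inject_error -o crc32c -t disable
$rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Re-enable injection: corrupt crc32c operations at an interval of 256,
# then trigger the timed workload that bdevperf was configured with (-w/-o/-t/-q).
$rpc accel_error_inject_error -o crc32c -t corrupt -i 256
./spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests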
00:26:05.448 [2024-07-15 14:05:00.195707] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190ee5c8 00:26:05.448 [2024-07-15 14:05:00.196628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.448 [2024-07-15 14:05:00.196674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:05.448 [2024-07-15 14:05:00.207941] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190f7970 00:26:05.448 [2024-07-15 14:05:00.208644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:13619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.448 [2024-07-15 14:05:00.208671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:05.448 [2024-07-15 14:05:00.219827] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190e4de8 00:26:05.448 [2024-07-15 14:05:00.220683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:18747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.448 [2024-07-15 14:05:00.220716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:05.448 [2024-07-15 14:05:00.230565] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190f2d80 00:26:05.448 [2024-07-15 14:05:00.232116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.448 [2024-07-15 14:05:00.232142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:05.448 [2024-07-15 14:05:00.240531] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190fe720 00:26:05.448 [2024-07-15 14:05:00.241277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:19892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.448 [2024-07-15 14:05:00.241306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:05.448 [2024-07-15 14:05:00.252665] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190e88f8 00:26:05.448 [2024-07-15 14:05:00.253572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:14212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.448 [2024-07-15 14:05:00.253614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:05.448 [2024-07-15 14:05:00.263983] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190e2c28 00:26:05.448 [2024-07-15 14:05:00.265079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:15805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.448 [2024-07-15 14:05:00.265104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 
sqhd:0052 p:0 m:0 dnr:0 00:26:05.448 [2024-07-15 14:05:00.274254] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190fd640 00:26:05.448 [2024-07-15 14:05:00.275335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.448 [2024-07-15 14:05:00.275360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:05.448 [2024-07-15 14:05:00.285684] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190f9f68 00:26:05.448 [2024-07-15 14:05:00.286935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:3212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.448 [2024-07-15 14:05:00.286973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:05.706 [2024-07-15 14:05:00.296256] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190de8a8 00:26:05.706 [2024-07-15 14:05:00.296985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.706 [2024-07-15 14:05:00.297011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:05.706 [2024-07-15 14:05:00.307236] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190ea248 00:26:05.706 [2024-07-15 14:05:00.307879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.706 [2024-07-15 14:05:00.307904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:05.706 [2024-07-15 14:05:00.319644] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190e4578 00:26:05.706 [2024-07-15 14:05:00.321163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:9530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.706 [2024-07-15 14:05:00.321197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:05.706 [2024-07-15 14:05:00.330676] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190e6300 00:26:05.706 [2024-07-15 14:05:00.332143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:13178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.706 [2024-07-15 14:05:00.332168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:05.706 [2024-07-15 14:05:00.339782] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190f8a50 00:26:05.706 [2024-07-15 14:05:00.340392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:15141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.706 [2024-07-15 14:05:00.340416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:48 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:05.706 [2024-07-15 14:05:00.350991] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190f2948 00:26:05.706 [2024-07-15 14:05:00.351728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:18038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.706 [2024-07-15 14:05:00.351775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:05.706 [2024-07-15 14:05:00.363324] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190f81e0 00:26:05.706 [2024-07-15 14:05:00.364959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:14892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.706 [2024-07-15 14:05:00.364984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:05.706 [2024-07-15 14:05:00.373352] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190ed920 00:26:05.706 [2024-07-15 14:05:00.374491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.706 [2024-07-15 14:05:00.374516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:05.706 [2024-07-15 14:05:00.384105] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190e5a90 00:26:05.706 [2024-07-15 14:05:00.385248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.706 [2024-07-15 14:05:00.385272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:05.706 [2024-07-15 14:05:00.395275] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190f7da8 00:26:05.706 [2024-07-15 14:05:00.396568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:6356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.706 [2024-07-15 14:05:00.396592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:05.706 [2024-07-15 14:05:00.406633] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190e0a68 00:26:05.706 [2024-07-15 14:05:00.408147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:17570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.706 [2024-07-15 14:05:00.408172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.706 [2024-07-15 14:05:00.415550] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190f7100 00:26:05.706 [2024-07-15 14:05:00.416475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:3606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.707 [2024-07-15 14:05:00.416499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:05.707 [2024-07-15 14:05:00.427935] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190e12d8 00:26:05.707 [2024-07-15 14:05:00.429419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:5215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.707 [2024-07-15 14:05:00.429448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:05.707 [2024-07-15 14:05:00.439305] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190ed920 00:26:05.707 [2024-07-15 14:05:00.440866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:4852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.707 [2024-07-15 14:05:00.440892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.707 [2024-07-15 14:05:00.449373] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190e73e0 00:26:05.707 [2024-07-15 14:05:00.450572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.707 [2024-07-15 14:05:00.450597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:05.707 [2024-07-15 14:05:00.460340] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190df118 00:26:05.707 [2024-07-15 14:05:00.461324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.707 [2024-07-15 14:05:00.461349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.707 [2024-07-15 14:05:00.471344] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190ee5c8 00:26:05.707 [2024-07-15 14:05:00.472680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.707 [2024-07-15 14:05:00.472704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.707 [2024-07-15 14:05:00.482140] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190fe2e8 00:26:05.707 [2024-07-15 14:05:00.483424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:14432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.707 [2024-07-15 14:05:00.483449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.707 [2024-07-15 14:05:00.493281] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190ef270 00:26:05.707 [2024-07-15 14:05:00.494688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:12327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.707 [2024-07-15 14:05:00.494713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:05.707 [2024-07-15 14:05:00.502143] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190e5658 00:26:05.707 [2024-07-15 14:05:00.503036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:23142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.707 [2024-07-15 14:05:00.503061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:05.707 [2024-07-15 14:05:00.512424] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190f6cc8 00:26:05.707 [2024-07-15 14:05:00.513332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:21997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.707 [2024-07-15 14:05:00.513356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:05.707 [2024-07-15 14:05:00.524413] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190ef6a8 00:26:05.707 [2024-07-15 14:05:00.525477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:6869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.707 [2024-07-15 14:05:00.525502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:05.707 [2024-07-15 14:05:00.535587] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190e6b70 00:26:05.707 [2024-07-15 14:05:00.536713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.707 [2024-07-15 14:05:00.536764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:05.707 [2024-07-15 14:05:00.546153] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190e12d8 00:26:05.965 [2024-07-15 14:05:00.547417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.965 [2024-07-15 14:05:00.547459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:05.965 [2024-07-15 14:05:00.558621] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190fc560 00:26:05.965 [2024-07-15 14:05:00.559915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:9968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.965 [2024-07-15 14:05:00.559941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:05.965 [2024-07-15 14:05:00.568689] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190f2948 00:26:05.965 [2024-07-15 14:05:00.570033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:7228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.965 [2024-07-15 14:05:00.570072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:05.965 [2024-07-15 14:05:00.578797] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190feb58 00:26:05.965 [2024-07-15 14:05:00.579621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:2245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.965 [2024-07-15 14:05:00.579646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:05.965 [2024-07-15 14:05:00.589992] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190fe2e8 00:26:05.965 [2024-07-15 14:05:00.590744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.965 [2024-07-15 14:05:00.590771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:05.965 [2024-07-15 14:05:00.601625] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190fb048 00:26:05.965 [2024-07-15 14:05:00.602487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:20874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.965 [2024-07-15 14:05:00.602523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:05.965 [2024-07-15 14:05:00.614482] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190e0a68 00:26:05.965 [2024-07-15 14:05:00.616150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.965 [2024-07-15 14:05:00.616175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:05.965 [2024-07-15 14:05:00.625095] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190ea248 00:26:05.965 [2024-07-15 14:05:00.626575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.965 [2024-07-15 14:05:00.626600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:05.966 [2024-07-15 14:05:00.636419] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190ebfd0 00:26:05.966 [2024-07-15 14:05:00.637854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.966 [2024-07-15 14:05:00.637879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:05.966 [2024-07-15 14:05:00.646697] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190f9b30 00:26:05.966 [2024-07-15 14:05:00.648148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.966 [2024-07-15 14:05:00.648172] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:05.966 [2024-07-15 14:05:00.658035] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190e5220 00:26:05.966 [2024-07-15 14:05:00.659618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:18563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.966 [2024-07-15 14:05:00.659643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:05.966 [2024-07-15 14:05:00.668183] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190f0788 00:26:05.966 [2024-07-15 14:05:00.669366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:15993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.966 [2024-07-15 14:05:00.669391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:05.966 [2024-07-15 14:05:00.678838] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190f7da8 00:26:05.966 [2024-07-15 14:05:00.680019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.966 [2024-07-15 14:05:00.680058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:05.966 [2024-07-15 14:05:00.689638] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190fa3a0 00:26:05.966 [2024-07-15 14:05:00.690778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.966 [2024-07-15 14:05:00.690803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:05.966 [2024-07-15 14:05:00.700476] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190e5658 00:26:05.966 [2024-07-15 14:05:00.701598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:9770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.966 [2024-07-15 14:05:00.701622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:05.966 [2024-07-15 14:05:00.710603] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190ddc00 00:26:05.966 [2024-07-15 14:05:00.711782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:2204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.966 [2024-07-15 14:05:00.711822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:05.966 [2024-07-15 14:05:00.722759] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190e88f8 00:26:05.966 [2024-07-15 14:05:00.724075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:11376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.966 [2024-07-15 
14:05:00.724099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:05.966 [2024-07-15 14:05:00.733955] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190e5658 00:26:05.966 [2024-07-15 14:05:00.735443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.966 [2024-07-15 14:05:00.735479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:05.966 [2024-07-15 14:05:00.744354] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190fef90 00:26:05.966 [2024-07-15 14:05:00.745830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.966 [2024-07-15 14:05:00.745857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:05.966 [2024-07-15 14:05:00.754688] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190f7100 00:26:05.966 [2024-07-15 14:05:00.755755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:19517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.966 [2024-07-15 14:05:00.755784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:05.966 [2024-07-15 14:05:00.765777] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190e5658 00:26:05.966 [2024-07-15 14:05:00.766816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:18203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.966 [2024-07-15 14:05:00.766842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:05.966 [2024-07-15 14:05:00.776908] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190fa3a0 00:26:05.966 [2024-07-15 14:05:00.777931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:1256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.966 [2024-07-15 14:05:00.777958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:05.966 [2024-07-15 14:05:00.788510] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190f7970 00:26:05.966 [2024-07-15 14:05:00.789625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:18391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.966 [2024-07-15 14:05:00.789651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:05.966 [2024-07-15 14:05:00.799830] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190fc998 00:26:05.966 [2024-07-15 14:05:00.800983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:22263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:05.966 [2024-07-15 14:05:00.801024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:06.226 [2024-07-15 14:05:00.813115] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190f46d0 00:26:06.226 [2024-07-15 14:05:00.814785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:9669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.226 [2024-07-15 14:05:00.814822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:06.226 [2024-07-15 14:05:00.824331] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190f1430 00:26:06.226 [2024-07-15 14:05:00.826034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:13376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.226 [2024-07-15 14:05:00.826063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:06.226 [2024-07-15 14:05:00.833494] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190f2948 00:26:06.226 [2024-07-15 14:05:00.834299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:11717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.226 [2024-07-15 14:05:00.834324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:06.226 [2024-07-15 14:05:00.844704] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190f9b30 00:26:06.226 [2024-07-15 14:05:00.845742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:11280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.226 [2024-07-15 14:05:00.845767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:06.226 [2024-07-15 14:05:00.855144] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190fe2e8 00:26:06.226 [2024-07-15 14:05:00.856848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.226 [2024-07-15 14:05:00.856874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:06.226 [2024-07-15 14:05:00.864763] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190fc128 00:26:06.226 [2024-07-15 14:05:00.865631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:12062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.226 [2024-07-15 14:05:00.865655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:06.226 [2024-07-15 14:05:00.877208] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190ddc00 00:26:06.226 [2024-07-15 14:05:00.878265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:12630 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:26:06.226 [2024-07-15 14:05:00.878291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:06.226 [2024-07-15 14:05:00.888629] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190f1430 00:26:06.226 [2024-07-15 14:05:00.889710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:2686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.226 [2024-07-15 14:05:00.889763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:06.226 [2024-07-15 14:05:00.898820] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190df550 00:26:06.226 [2024-07-15 14:05:00.899880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:11723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.226 [2024-07-15 14:05:00.899905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:06.226 [2024-07-15 14:05:00.910921] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190e5220 00:26:06.226 [2024-07-15 14:05:00.912246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:2448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.226 [2024-07-15 14:05:00.912270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:06.226 [2024-07-15 14:05:00.922148] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190fa7d8 00:26:06.226 [2024-07-15 14:05:00.923562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:6647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.226 [2024-07-15 14:05:00.923598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:06.226 [2024-07-15 14:05:00.932426] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190e5658 00:26:06.226 [2024-07-15 14:05:00.933847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:25323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.226 [2024-07-15 14:05:00.933873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:06.226 [2024-07-15 14:05:00.943506] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190e0a68 00:26:06.226 [2024-07-15 14:05:00.944942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.226 [2024-07-15 14:05:00.944967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:06.226 [2024-07-15 14:05:00.954648] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190fc998 00:26:06.226 [2024-07-15 14:05:00.956101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:11349 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.226 [2024-07-15 14:05:00.956134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:06.226 [2024-07-15 14:05:00.965520] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190e9e10 00:26:06.226 [2024-07-15 14:05:00.967097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:18755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.226 [2024-07-15 14:05:00.967121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:06.226 [2024-07-15 14:05:00.975657] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190fef90 00:26:06.226 [2024-07-15 14:05:00.976787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:6572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.226 [2024-07-15 14:05:00.976812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:06.226 [2024-07-15 14:05:00.986367] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190e3498 00:26:06.226 [2024-07-15 14:05:00.987514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:2525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.226 [2024-07-15 14:05:00.987538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:06.226 [2024-07-15 14:05:00.997246] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190e0a68 00:26:06.226 [2024-07-15 14:05:00.998389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:17116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.226 [2024-07-15 14:05:00.998414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:06.226 [2024-07-15 14:05:01.008407] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190e23b8 00:26:06.226 [2024-07-15 14:05:01.009418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:10502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.226 [2024-07-15 14:05:01.009443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:06.226 [2024-07-15 14:05:01.020184] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190fdeb0 00:26:06.226 [2024-07-15 14:05:01.021594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.226 [2024-07-15 14:05:01.021619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:06.226 [2024-07-15 14:05:01.029163] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190ed0b0 00:26:06.226 [2024-07-15 14:05:01.030038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 
lba:5642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.226 [2024-07-15 14:05:01.030063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:06.226 [2024-07-15 14:05:01.039995] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190e1710 00:26:06.226 [2024-07-15 14:05:01.040850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:2375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.226 [2024-07-15 14:05:01.040875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:06.226 [2024-07-15 14:05:01.050850] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190f31b8 00:26:06.226 [2024-07-15 14:05:01.051701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:7482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.226 [2024-07-15 14:05:01.051745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:06.226 [2024-07-15 14:05:01.061829] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190fb8b8 00:26:06.226 [2024-07-15 14:05:01.062748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:24062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.226 [2024-07-15 14:05:01.062773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:06.485 [2024-07-15 14:05:01.073337] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190f1868 00:26:06.485 [2024-07-15 14:05:01.074188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.485 [2024-07-15 14:05:01.074212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:06.485 [2024-07-15 14:05:01.084195] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190e9e10 00:26:06.485 [2024-07-15 14:05:01.085059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:21092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.485 [2024-07-15 14:05:01.085084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:06.485 [2024-07-15 14:05:01.095038] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190e5220 00:26:06.485 [2024-07-15 14:05:01.095891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:14860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.485 [2024-07-15 14:05:01.095916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:06.485 [2024-07-15 14:05:01.106277] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190de470 00:26:06.485 [2024-07-15 14:05:01.107267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:98 nsid:1 lba:172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.485 [2024-07-15 14:05:01.107291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:06.485 [2024-07-15 14:05:01.117285] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190e5a90 00:26:06.485 [2024-07-15 14:05:01.118312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:6732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.485 [2024-07-15 14:05:01.118336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:06.485 [2024-07-15 14:05:01.128124] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190eb760 00:26:06.485 [2024-07-15 14:05:01.129107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.485 [2024-07-15 14:05:01.129134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:06.485 [2024-07-15 14:05:01.138971] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190f6890 00:26:06.485 [2024-07-15 14:05:01.139987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.485 [2024-07-15 14:05:01.140027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:06.485 [2024-07-15 14:05:01.149809] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190de038 00:26:06.485 [2024-07-15 14:05:01.150792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:23534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.485 [2024-07-15 14:05:01.150817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:06.485 [2024-07-15 14:05:01.160600] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190e4578 00:26:06.485 [2024-07-15 14:05:01.161624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.485 [2024-07-15 14:05:01.161648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:06.485 [2024-07-15 14:05:01.171485] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190f9f68 00:26:06.485 [2024-07-15 14:05:01.172488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.485 [2024-07-15 14:05:01.172512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:06.485 [2024-07-15 14:05:01.182346] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190e3498 00:26:06.485 [2024-07-15 14:05:01.183385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:53 nsid:1 lba:3877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.485 [2024-07-15 14:05:01.183410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:06.485 [2024-07-15 14:05:01.193187] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190e0a68 00:26:06.485 [2024-07-15 14:05:01.194209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:18571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.485 [2024-07-15 14:05:01.194233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:06.485 [2024-07-15 14:05:01.204042] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190ee190 00:26:06.485 [2024-07-15 14:05:01.205048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.485 [2024-07-15 14:05:01.205077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:06.485 [2024-07-15 14:05:01.215111] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190e73e0 00:26:06.485 [2024-07-15 14:05:01.216087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.485 [2024-07-15 14:05:01.216111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:06.485 [2024-07-15 14:05:01.225361] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190fcdd0 00:26:06.485 [2024-07-15 14:05:01.226287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.485 [2024-07-15 14:05:01.226312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:06.485 [2024-07-15 14:05:01.236671] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190ff3c8 00:26:06.485 [2024-07-15 14:05:01.237754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.485 [2024-07-15 14:05:01.237780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:06.485 [2024-07-15 14:05:01.248888] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190e27f0 00:26:06.485 [2024-07-15 14:05:01.250148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.485 [2024-07-15 14:05:01.250172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:06.485 [2024-07-15 14:05:01.261030] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190f5be8 00:26:06.485 [2024-07-15 14:05:01.262591] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:17425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.485 [2024-07-15 14:05:01.262619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:06.485 [2024-07-15 14:05:01.271884] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190e6738 00:26:06.485 [2024-07-15 14:05:01.273294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.485 [2024-07-15 14:05:01.273319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:06.485 [2024-07-15 14:05:01.283372] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190dece0 00:26:06.485 [2024-07-15 14:05:01.284895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.485 [2024-07-15 14:05:01.284922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:06.485 [2024-07-15 14:05:01.293461] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190f7da8 00:26:06.485 [2024-07-15 14:05:01.294560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:20580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.485 [2024-07-15 14:05:01.294585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:06.485 [2024-07-15 14:05:01.304447] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190fef90 00:26:06.485 [2024-07-15 14:05:01.305399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.485 [2024-07-15 14:05:01.305425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:06.485 [2024-07-15 14:05:01.315426] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190ea680 00:26:06.485 [2024-07-15 14:05:01.316668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:12911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.485 [2024-07-15 14:05:01.316692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:06.745 [2024-07-15 14:05:01.327098] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190feb58 00:26:06.745 [2024-07-15 14:05:01.328606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:5533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.745 [2024-07-15 14:05:01.328633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:06.745 [2024-07-15 14:05:01.337929] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190f96f8 00:26:06.745 [2024-07-15 14:05:01.339324] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.745 [2024-07-15 14:05:01.339348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:06.745 [2024-07-15 14:05:01.349495] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190e3060 00:26:06.745 [2024-07-15 14:05:01.350994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:25008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.745 [2024-07-15 14:05:01.351038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:06.745 [2024-07-15 14:05:01.359571] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190dfdc0 00:26:06.745 [2024-07-15 14:05:01.360664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:10499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.745 [2024-07-15 14:05:01.360690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:06.745 [2024-07-15 14:05:01.370471] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190f9b30 00:26:06.745 [2024-07-15 14:05:01.371423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.745 [2024-07-15 14:05:01.371448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:06.745 [2024-07-15 14:05:01.380516] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190ea680 00:26:06.745 [2024-07-15 14:05:01.381548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.745 [2024-07-15 14:05:01.381572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:06.745 [2024-07-15 14:05:01.391374] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190f96f8 00:26:06.745 [2024-07-15 14:05:01.392359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:10781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.745 [2024-07-15 14:05:01.392384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:06.745 [2024-07-15 14:05:01.401639] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190dfdc0 00:26:06.745 [2024-07-15 14:05:01.402614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:15013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.745 [2024-07-15 14:05:01.402639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:06.745 [2024-07-15 14:05:01.413685] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190fda78 00:26:06.745 [2024-07-15 14:05:01.414812] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.745 [2024-07-15 14:05:01.414837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:06.745 [2024-07-15 14:05:01.423855] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190eaef0 00:26:06.745 [2024-07-15 14:05:01.424963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.745 [2024-07-15 14:05:01.424988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:06.745 [2024-07-15 14:05:01.434952] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190e3d08 00:26:06.745 [2024-07-15 14:05:01.436074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.745 [2024-07-15 14:05:01.436098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:06.745 [2024-07-15 14:05:01.446242] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190e3060 00:26:06.745 [2024-07-15 14:05:01.447359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.745 [2024-07-15 14:05:01.447385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:06.745 [2024-07-15 14:05:01.456504] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190fdeb0 00:26:06.745 [2024-07-15 14:05:01.457609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:10478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.745 [2024-07-15 14:05:01.457633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:06.745 [2024-07-15 14:05:01.466900] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190fda78 00:26:06.745 [2024-07-15 14:05:01.467680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.745 [2024-07-15 14:05:01.467704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:06.745 [2024-07-15 14:05:01.477050] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190ef6a8 00:26:06.745 [2024-07-15 14:05:01.477817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.745 [2024-07-15 14:05:01.477850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:06.745 [2024-07-15 14:05:01.490365] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190dfdc0 00:26:06.745 [2024-07-15 
14:05:01.491594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:10224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.745 [2024-07-15 14:05:01.491623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:06.745 [2024-07-15 14:05:01.500716] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190fb048 00:26:06.745 [2024-07-15 14:05:01.501949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:13977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.745 [2024-07-15 14:05:01.501974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:06.746 [2024-07-15 14:05:01.511818] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190f1ca0 00:26:06.746 [2024-07-15 14:05:01.513163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:17783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.746 [2024-07-15 14:05:01.513190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:06.746 [2024-07-15 14:05:01.523523] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190eaab8 00:26:06.746 [2024-07-15 14:05:01.524770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:13734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.746 [2024-07-15 14:05:01.524796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:06.746 [2024-07-15 14:05:01.534886] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190efae0 00:26:06.746 [2024-07-15 14:05:01.536141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.746 [2024-07-15 14:05:01.536166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:06.746 [2024-07-15 14:05:01.544891] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190ec840 00:26:06.746 [2024-07-15 14:05:01.546661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.746 [2024-07-15 14:05:01.546685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:06.746 [2024-07-15 14:05:01.555218] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190e23b8 00:26:06.746 [2024-07-15 14:05:01.556056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:20063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.746 [2024-07-15 14:05:01.556080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:06.746 [2024-07-15 14:05:01.566092] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190fc128 
00:26:06.746 [2024-07-15 14:05:01.566855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:6281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.746 [2024-07-15 14:05:01.566880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:06.746 [2024-07-15 14:05:01.577324] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190f46d0 00:26:06.746 [2024-07-15 14:05:01.578261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.746 [2024-07-15 14:05:01.578286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:07.006 [2024-07-15 14:05:01.588956] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190e1b48 00:26:07.006 [2024-07-15 14:05:01.589939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.006 [2024-07-15 14:05:01.589965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:07.006 [2024-07-15 14:05:01.599994] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190ea680 00:26:07.006 [2024-07-15 14:05:01.600857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:8178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.006 [2024-07-15 14:05:01.600884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:07.006 [2024-07-15 14:05:01.611411] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190e5ec8 00:26:07.006 [2024-07-15 14:05:01.612495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:14711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.006 [2024-07-15 14:05:01.612520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:07.006 [2024-07-15 14:05:01.622655] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190f0bc0 00:26:07.006 [2024-07-15 14:05:01.623764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:14752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.006 [2024-07-15 14:05:01.623797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:07.006 [2024-07-15 14:05:01.634124] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190fb480 00:26:07.006 [2024-07-15 14:05:01.635331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:21294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.006 [2024-07-15 14:05:01.635364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:07.006 [2024-07-15 14:05:01.644413] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with 
pdu=0x2000190edd58 00:26:07.006 [2024-07-15 14:05:01.645629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:8181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.006 [2024-07-15 14:05:01.645654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:07.006 [2024-07-15 14:05:01.655586] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190f7da8 00:26:07.006 [2024-07-15 14:05:01.656800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:6194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.006 [2024-07-15 14:05:01.656836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:07.006 [2024-07-15 14:05:01.666857] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190f3e60 00:26:07.006 [2024-07-15 14:05:01.668081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.006 [2024-07-15 14:05:01.668106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:07.006 [2024-07-15 14:05:01.677202] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190f57b0 00:26:07.006 [2024-07-15 14:05:01.678394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.006 [2024-07-15 14:05:01.678420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:07.006 [2024-07-15 14:05:01.687408] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190fa3a0 00:26:07.006 [2024-07-15 14:05:01.688198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:4442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.006 [2024-07-15 14:05:01.688223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:07.006 [2024-07-15 14:05:01.698132] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190ea248 00:26:07.006 [2024-07-15 14:05:01.698927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.006 [2024-07-15 14:05:01.698952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:07.006 [2024-07-15 14:05:01.709305] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190eea00 00:26:07.006 [2024-07-15 14:05:01.709963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:6011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.006 [2024-07-15 14:05:01.709988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:07.006 [2024-07-15 14:05:01.720411] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xe210d0) with pdu=0x2000190eee38 00:26:07.006 [2024-07-15 14:05:01.721346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:10408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.006 [2024-07-15 14:05:01.721370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:07.006 [2024-07-15 14:05:01.731236] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190fef90 00:26:07.006 [2024-07-15 14:05:01.732183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:17989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.006 [2024-07-15 14:05:01.732208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:07.006 [2024-07-15 14:05:01.742128] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190f2510 00:26:07.006 [2024-07-15 14:05:01.743069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:3446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.006 [2024-07-15 14:05:01.743094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:07.006 [2024-07-15 14:05:01.753306] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190f6020 00:26:07.006 [2024-07-15 14:05:01.754107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.006 [2024-07-15 14:05:01.754131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:07.006 [2024-07-15 14:05:01.764395] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190ed0b0 00:26:07.006 [2024-07-15 14:05:01.765601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.006 [2024-07-15 14:05:01.765627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:07.006 [2024-07-15 14:05:01.776000] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190df988 00:26:07.006 [2024-07-15 14:05:01.777098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:4618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.006 [2024-07-15 14:05:01.777123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:07.006 [2024-07-15 14:05:01.787056] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190f7538 00:26:07.006 [2024-07-15 14:05:01.788141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:15599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.006 [2024-07-15 14:05:01.788167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:07.006 [2024-07-15 14:05:01.798328] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xe210d0) with pdu=0x2000190e3060 00:26:07.006 [2024-07-15 14:05:01.799277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:46 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.006 [2024-07-15 14:05:01.799302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:07.006 [2024-07-15 14:05:01.809412] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190e5a90 00:26:07.006 [2024-07-15 14:05:01.810629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:3075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.006 [2024-07-15 14:05:01.810654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:07.006 [2024-07-15 14:05:01.820265] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190f3a28 00:26:07.006 [2024-07-15 14:05:01.821470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:25389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.006 [2024-07-15 14:05:01.821494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:07.007 [2024-07-15 14:05:01.831116] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190f6cc8 00:26:07.007 [2024-07-15 14:05:01.832319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.007 [2024-07-15 14:05:01.832344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:07.007 [2024-07-15 14:05:01.842064] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190f4f40 00:26:07.007 [2024-07-15 14:05:01.843342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.007 [2024-07-15 14:05:01.843368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:07.265 [2024-07-15 14:05:01.853435] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190fb480 00:26:07.265 [2024-07-15 14:05:01.854670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:3574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.265 [2024-07-15 14:05:01.854695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:07.265 [2024-07-15 14:05:01.865690] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190f9b30 00:26:07.265 [2024-07-15 14:05:01.867476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.265 [2024-07-15 14:05:01.867500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:07.265 [2024-07-15 14:05:01.873444] tcp.c:2067:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190f57b0 00:26:07.265 [2024-07-15 14:05:01.874245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:18446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.265 [2024-07-15 14:05:01.874274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:07.265 [2024-07-15 14:05:01.884454] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190e0ea0 00:26:07.265 [2024-07-15 14:05:01.885275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:21973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.265 [2024-07-15 14:05:01.885300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:07.265 [2024-07-15 14:05:01.896688] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190e99d8 00:26:07.265 [2024-07-15 14:05:01.897958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.265 [2024-07-15 14:05:01.897983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:07.266 [2024-07-15 14:05:01.907994] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190ecc78 00:26:07.266 [2024-07-15 14:05:01.909404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:12167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.266 [2024-07-15 14:05:01.909429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:07.266 [2024-07-15 14:05:01.919586] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190e95a0 00:26:07.266 [2024-07-15 14:05:01.921199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:6851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.266 [2024-07-15 14:05:01.921223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:07.266 [2024-07-15 14:05:01.930046] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190f0350 00:26:07.266 [2024-07-15 14:05:01.931273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.266 [2024-07-15 14:05:01.931305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:07.266 [2024-07-15 14:05:01.940215] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190e4140 00:26:07.266 [2024-07-15 14:05:01.941952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:5963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.266 [2024-07-15 14:05:01.941979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:07.266 [2024-07-15 14:05:01.951956] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190edd58 00:26:07.266 [2024-07-15 14:05:01.953836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.266 [2024-07-15 14:05:01.953862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.266 [2024-07-15 14:05:01.961618] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190f6458 00:26:07.266 [2024-07-15 14:05:01.962487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:24768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.266 [2024-07-15 14:05:01.962511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:07.266 [2024-07-15 14:05:01.972926] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190f2948 00:26:07.266 [2024-07-15 14:05:01.973909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.266 [2024-07-15 14:05:01.973934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:07.266 [2024-07-15 14:05:01.984191] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190eee38 00:26:07.266 [2024-07-15 14:05:01.985331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.266 [2024-07-15 14:05:01.985356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:07.266 [2024-07-15 14:05:01.995469] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190e12d8 00:26:07.266 [2024-07-15 14:05:01.996803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:13105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.266 [2024-07-15 14:05:01.996840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:07.266 [2024-07-15 14:05:02.005662] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190f96f8 00:26:07.266 [2024-07-15 14:05:02.006631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.266 [2024-07-15 14:05:02.006655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:07.266 [2024-07-15 14:05:02.016995] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190fdeb0 00:26:07.266 [2024-07-15 14:05:02.017860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.266 [2024-07-15 14:05:02.017887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:07.266 [2024-07-15 
14:05:02.030300] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190e5a90 00:26:07.266 [2024-07-15 14:05:02.031950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:23 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.266 [2024-07-15 14:05:02.031981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:07.266 [2024-07-15 14:05:02.041922] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190e84c0 00:26:07.266 [2024-07-15 14:05:02.043627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:13417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.266 [2024-07-15 14:05:02.043652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:07.266 [2024-07-15 14:05:02.049613] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190fef90 00:26:07.266 [2024-07-15 14:05:02.050347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.266 [2024-07-15 14:05:02.050372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:07.266 [2024-07-15 14:05:02.061906] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190e6300 00:26:07.266 [2024-07-15 14:05:02.062849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:15214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.266 [2024-07-15 14:05:02.062875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:07.266 [2024-07-15 14:05:02.073288] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190fef90 00:26:07.266 [2024-07-15 14:05:02.074292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:5171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.266 [2024-07-15 14:05:02.074317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.266 [2024-07-15 14:05:02.083485] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190f0ff8 00:26:07.266 [2024-07-15 14:05:02.085184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.266 [2024-07-15 14:05:02.085210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.266 [2024-07-15 14:05:02.093476] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190f57b0 00:26:07.266 [2024-07-15 14:05:02.094414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:19025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.266 [2024-07-15 14:05:02.094438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:07.266 
[2024-07-15 14:05:02.105052] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190f1430 00:26:07.524 [2024-07-15 14:05:02.106203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.524 [2024-07-15 14:05:02.106233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:07.524 [2024-07-15 14:05:02.115699] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190e8d30 00:26:07.524 [2024-07-15 14:05:02.116767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.524 [2024-07-15 14:05:02.116792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.524 [2024-07-15 14:05:02.127092] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190df550 00:26:07.524 [2024-07-15 14:05:02.128286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.524 [2024-07-15 14:05:02.128312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:07.524 [2024-07-15 14:05:02.138506] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190e8088 00:26:07.524 [2024-07-15 14:05:02.139842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:7881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.524 [2024-07-15 14:05:02.139868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.524 [2024-07-15 14:05:02.148589] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190e4140 00:26:07.524 [2024-07-15 14:05:02.149524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.524 [2024-07-15 14:05:02.149548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:07.524 [2024-07-15 14:05:02.159571] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190e1710 00:26:07.524 [2024-07-15 14:05:02.160377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:22762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.524 [2024-07-15 14:05:02.160411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.524 [2024-07-15 14:05:02.170570] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190edd58 00:26:07.524 [2024-07-15 14:05:02.171586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.524 [2024-07-15 14:05:02.171611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:26:07.524 [2024-07-15 14:05:02.181385] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe210d0) with pdu=0x2000190e12d8 00:26:07.524 [2024-07-15 14:05:02.182393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.524 [2024-07-15 14:05:02.182417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.524 00:26:07.524 Latency(us) 00:26:07.524 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:07.524 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:07.524 nvme0n1 : 2.00 23268.15 90.89 0.00 0.00 5495.27 2220.94 13981.01 00:26:07.524 =================================================================================================================== 00:26:07.524 Total : 23268.15 90.89 0.00 0.00 5495.27 2220.94 13981.01 00:26:07.524 0 00:26:07.524 14:05:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:07.524 14:05:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:07.524 14:05:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:07.524 14:05:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:07.524 | .driver_specific 00:26:07.524 | .nvme_error 00:26:07.524 | .status_code 00:26:07.524 | .command_transient_transport_error' 00:26:07.781 14:05:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 182 > 0 )) 00:26:07.781 14:05:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3854663 00:26:07.781 14:05:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3854663 ']' 00:26:07.781 14:05:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3854663 00:26:07.781 14:05:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:26:07.781 14:05:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:07.781 14:05:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3854663 00:26:07.781 14:05:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:07.781 14:05:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:07.782 14:05:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3854663' 00:26:07.782 killing process with pid 3854663 00:26:07.782 14:05:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3854663 00:26:07.782 Received shutdown signal, test time was about 2.000000 seconds 00:26:07.782 00:26:07.782 Latency(us) 00:26:07.782 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:07.782 =================================================================================================================== 00:26:07.782 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:07.782 14:05:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3854663 00:26:08.040 
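The trace above is how host/digest.sh decides this case passed: it pulls the accumulated NVMe error statistics for nvme0n1 from the bdevperf RPC socket and requires the transient-transport-error counter (182 in this run) to be greater than zero, i.e. the injected CRC32C corruption really did surface as retryable digest errors. A minimal stand-alone sketch of that check, built only from the RPC call and jq filter printed in the trace (the variable names are illustrative, not the literal helpers from digest.sh):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # read per-bdev I/O statistics, including the NVMe error counters
    # (enabled earlier via bdev_nvme_set_options --nvme-error-stat)
    errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error')
    # fail the case if no transient transport errors were observed
    (( errcount > 0 ))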
14:05:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:26:08.040 14:05:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:08.040 14:05:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:08.040 14:05:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:08.040 14:05:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:08.040 14:05:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3855168 00:26:08.040 14:05:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:26:08.040 14:05:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3855168 /var/tmp/bperf.sock 00:26:08.040 14:05:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3855168 ']' 00:26:08.040 14:05:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:08.040 14:05:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:08.040 14:05:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:08.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:08.040 14:05:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:08.040 14:05:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:08.040 [2024-07-15 14:05:02.761578] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:26:08.040 [2024-07-15 14:05:02.761657] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3855168 ] 00:26:08.040 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:08.040 Zero copy mechanism will not be used. 
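The trace above shows host/digest.sh@115 starting the next error case (randwrite, 128 KiB I/O, queue depth 16): bdevperf is launched in wait-for-tests mode (-z) on its own RPC socket and the script blocks until that socket answers. A rough equivalent, assuming the paths printed in the trace; the polling loop is a simplified stand-in for waitforlisten, not the actual helper:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # start bdevperf idle (-z) so the workload can be kicked off later over RPC
    "$spdk"/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    # wait (up to ~10 s) for the UNIX-domain RPC socket to accept requests
    for _ in $(seq 1 100); do
        "$spdk"/scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done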
00:26:08.040 EAL: No free 2048 kB hugepages reported on node 1 00:26:08.040 [2024-07-15 14:05:02.821073] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:08.298 [2024-07-15 14:05:02.936674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:08.298 14:05:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:08.298 14:05:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:26:08.298 14:05:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:08.298 14:05:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:08.555 14:05:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:08.555 14:05:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.555 14:05:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:08.555 14:05:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.555 14:05:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:08.555 14:05:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:08.812 nvme0n1 00:26:08.812 14:05:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:08.812 14:05:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.812 14:05:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:09.072 14:05:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.072 14:05:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:09.072 14:05:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:09.072 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:09.072 Zero copy mechanism will not be used. 00:26:09.072 Running I/O for 2 seconds... 
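The digest-error run whose output follows is wired up entirely over JSON-RPC; a hedged reconstruction of that setup, with every command taken from the xtrace above and only the two helper definitions and shortened paths assumed:

# bperf_rpc targets bdevperf's private socket, rpc_cmd the nvmf target's default socket
# (simplified stand-ins for the autotest_common.sh / digest.sh helpers).
bperf_rpc() { scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
rpc_cmd()   { scripts/rpc.py "$@"; }

# Count NVMe errors per status code and retry failed I/O indefinitely.
bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach the TCP controller with data digest enabled so corrupted payloads are detected.
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Have the target's accel layer corrupt 32 crc32c operations, which the initiator's
# data-digest check then reports as "Data digest error" / transient transport errors.
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32

# Start the queued bdevperf workload (randwrite, 128 KiB blocks, QD 16, 2 seconds).
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests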
00:26:09.072 [2024-07-15 14:05:03.769903] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.072 [2024-07-15 14:05:03.770233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.072 [2024-07-15 14:05:03.770268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.072 [2024-07-15 14:05:03.776421] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.072 [2024-07-15 14:05:03.776711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.072 [2024-07-15 14:05:03.776761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.072 [2024-07-15 14:05:03.784252] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.072 [2024-07-15 14:05:03.784552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.072 [2024-07-15 14:05:03.784591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.072 [2024-07-15 14:05:03.790459] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.072 [2024-07-15 14:05:03.790778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.072 [2024-07-15 14:05:03.790807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.072 [2024-07-15 14:05:03.796806] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.072 [2024-07-15 14:05:03.797107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.072 [2024-07-15 14:05:03.797135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.072 [2024-07-15 14:05:03.803077] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.072 [2024-07-15 14:05:03.803464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.072 [2024-07-15 14:05:03.803505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.072 [2024-07-15 14:05:03.810015] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.072 [2024-07-15 14:05:03.810352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.072 [2024-07-15 14:05:03.810379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.072 [2024-07-15 14:05:03.816672] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.072 [2024-07-15 14:05:03.816999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.072 [2024-07-15 14:05:03.817033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.072 [2024-07-15 14:05:03.824360] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.072 [2024-07-15 14:05:03.824673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.072 [2024-07-15 14:05:03.824702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.072 [2024-07-15 14:05:03.830186] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.072 [2024-07-15 14:05:03.830467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.072 [2024-07-15 14:05:03.830494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.072 [2024-07-15 14:05:03.836398] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.072 [2024-07-15 14:05:03.836707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.072 [2024-07-15 14:05:03.836735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.072 [2024-07-15 14:05:03.842291] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.072 [2024-07-15 14:05:03.842584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.072 [2024-07-15 14:05:03.842611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.072 [2024-07-15 14:05:03.848899] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.072 [2024-07-15 14:05:03.849222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.072 [2024-07-15 14:05:03.849249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.072 [2024-07-15 14:05:03.855948] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.072 [2024-07-15 14:05:03.856286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.072 [2024-07-15 14:05:03.856313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.072 [2024-07-15 14:05:03.862619] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.072 [2024-07-15 14:05:03.862841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.072 [2024-07-15 14:05:03.862879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.072 [2024-07-15 14:05:03.870089] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.072 [2024-07-15 14:05:03.870409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.072 [2024-07-15 14:05:03.870444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.072 [2024-07-15 14:05:03.877568] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.072 [2024-07-15 14:05:03.877905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.072 [2024-07-15 14:05:03.877933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.072 [2024-07-15 14:05:03.885961] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.072 [2024-07-15 14:05:03.886274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.072 [2024-07-15 14:05:03.886303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.072 [2024-07-15 14:05:03.894176] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.072 [2024-07-15 14:05:03.894557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.072 [2024-07-15 14:05:03.894584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.072 [2024-07-15 14:05:03.901161] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.072 [2024-07-15 14:05:03.901471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.072 [2024-07-15 14:05:03.901497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.072 [2024-07-15 14:05:03.907733] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.072 [2024-07-15 14:05:03.908072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.072 [2024-07-15 14:05:03.908098] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.332 [2024-07-15 14:05:03.914988] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.332 [2024-07-15 14:05:03.915332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.332 [2024-07-15 14:05:03.915360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.333 [2024-07-15 14:05:03.921816] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.333 [2024-07-15 14:05:03.922145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.333 [2024-07-15 14:05:03.922171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.333 [2024-07-15 14:05:03.928784] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.333 [2024-07-15 14:05:03.929113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.333 [2024-07-15 14:05:03.929140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.333 [2024-07-15 14:05:03.935466] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.333 [2024-07-15 14:05:03.935803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.333 [2024-07-15 14:05:03.935832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.333 [2024-07-15 14:05:03.942533] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.333 [2024-07-15 14:05:03.942862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.333 [2024-07-15 14:05:03.942890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.333 [2024-07-15 14:05:03.949545] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.333 [2024-07-15 14:05:03.949864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.333 [2024-07-15 14:05:03.949891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.333 [2024-07-15 14:05:03.956466] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.333 [2024-07-15 14:05:03.956791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.333 
[2024-07-15 14:05:03.956818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.333 [2024-07-15 14:05:03.963280] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.333 [2024-07-15 14:05:03.963682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.333 [2024-07-15 14:05:03.963707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.333 [2024-07-15 14:05:03.970219] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.333 [2024-07-15 14:05:03.970601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.333 [2024-07-15 14:05:03.970642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.333 [2024-07-15 14:05:03.977521] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.333 [2024-07-15 14:05:03.977851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.333 [2024-07-15 14:05:03.977879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.333 [2024-07-15 14:05:03.984697] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.333 [2024-07-15 14:05:03.985022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.333 [2024-07-15 14:05:03.985065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.333 [2024-07-15 14:05:03.991888] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.333 [2024-07-15 14:05:03.992211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.333 [2024-07-15 14:05:03.992238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.333 [2024-07-15 14:05:03.999206] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.333 [2024-07-15 14:05:03.999525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.333 [2024-07-15 14:05:03.999560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.333 [2024-07-15 14:05:04.006272] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.333 [2024-07-15 14:05:04.006579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:09.333 [2024-07-15 14:05:04.006607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.333 [2024-07-15 14:05:04.013919] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.333 [2024-07-15 14:05:04.014250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.333 [2024-07-15 14:05:04.014277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.333 [2024-07-15 14:05:04.023506] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.333 [2024-07-15 14:05:04.023855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.333 [2024-07-15 14:05:04.023884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.333 [2024-07-15 14:05:04.033631] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.333 [2024-07-15 14:05:04.033978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.333 [2024-07-15 14:05:04.034015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.333 [2024-07-15 14:05:04.043603] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.333 [2024-07-15 14:05:04.043937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.333 [2024-07-15 14:05:04.043966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.333 [2024-07-15 14:05:04.053363] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.333 [2024-07-15 14:05:04.053683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.333 [2024-07-15 14:05:04.053710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.333 [2024-07-15 14:05:04.063379] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.333 [2024-07-15 14:05:04.063687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.333 [2024-07-15 14:05:04.063713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.333 [2024-07-15 14:05:04.072863] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.333 [2024-07-15 14:05:04.073203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.333 [2024-07-15 14:05:04.073238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.333 [2024-07-15 14:05:04.083081] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.333 [2024-07-15 14:05:04.083390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.333 [2024-07-15 14:05:04.083418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.333 [2024-07-15 14:05:04.093327] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.333 [2024-07-15 14:05:04.093649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.333 [2024-07-15 14:05:04.093684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.333 [2024-07-15 14:05:04.103439] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.333 [2024-07-15 14:05:04.103772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.333 [2024-07-15 14:05:04.103800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.333 [2024-07-15 14:05:04.113076] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.333 [2024-07-15 14:05:04.113468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.333 [2024-07-15 14:05:04.113508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.333 [2024-07-15 14:05:04.122878] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.333 [2024-07-15 14:05:04.123214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.333 [2024-07-15 14:05:04.123240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.333 [2024-07-15 14:05:04.133850] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.333 [2024-07-15 14:05:04.134074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.333 [2024-07-15 14:05:04.134101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.333 [2024-07-15 14:05:04.143602] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.333 [2024-07-15 14:05:04.143938] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.333 [2024-07-15 14:05:04.143966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.333 [2024-07-15 14:05:04.152036] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.333 [2024-07-15 14:05:04.152144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.333 [2024-07-15 14:05:04.152169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.333 [2024-07-15 14:05:04.160849] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.333 [2024-07-15 14:05:04.161188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.334 [2024-07-15 14:05:04.161215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.334 [2024-07-15 14:05:04.169781] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.334 [2024-07-15 14:05:04.170142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.334 [2024-07-15 14:05:04.170170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.593 [2024-07-15 14:05:04.178154] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.593 [2024-07-15 14:05:04.178467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.593 [2024-07-15 14:05:04.178494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.593 [2024-07-15 14:05:04.185754] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.593 [2024-07-15 14:05:04.186114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.593 [2024-07-15 14:05:04.186142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.593 [2024-07-15 14:05:04.193750] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.593 [2024-07-15 14:05:04.194066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.593 [2024-07-15 14:05:04.194093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.593 [2024-07-15 14:05:04.201481] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.593 
[2024-07-15 14:05:04.201833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.593 [2024-07-15 14:05:04.201861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.593 [2024-07-15 14:05:04.208792] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.593 [2024-07-15 14:05:04.209109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.593 [2024-07-15 14:05:04.209136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.593 [2024-07-15 14:05:04.216609] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.593 [2024-07-15 14:05:04.216930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.593 [2024-07-15 14:05:04.216958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.593 [2024-07-15 14:05:04.225225] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.593 [2024-07-15 14:05:04.225560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.593 [2024-07-15 14:05:04.225596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.593 [2024-07-15 14:05:04.233589] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.593 [2024-07-15 14:05:04.233936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.593 [2024-07-15 14:05:04.233966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.593 [2024-07-15 14:05:04.241472] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.593 [2024-07-15 14:05:04.241804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.593 [2024-07-15 14:05:04.241833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.593 [2024-07-15 14:05:04.249686] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.593 [2024-07-15 14:05:04.250045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.593 [2024-07-15 14:05:04.250073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.593 [2024-07-15 14:05:04.257872] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) 
with pdu=0x2000190fef90 00:26:09.593 [2024-07-15 14:05:04.258189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.593 [2024-07-15 14:05:04.258216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.593 [2024-07-15 14:05:04.265491] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.593 [2024-07-15 14:05:04.265811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.593 [2024-07-15 14:05:04.265839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.593 [2024-07-15 14:05:04.273083] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.594 [2024-07-15 14:05:04.273388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.594 [2024-07-15 14:05:04.273415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.594 [2024-07-15 14:05:04.280050] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.594 [2024-07-15 14:05:04.280368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.594 [2024-07-15 14:05:04.280396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.594 [2024-07-15 14:05:04.286363] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.594 [2024-07-15 14:05:04.286656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.594 [2024-07-15 14:05:04.286684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.594 [2024-07-15 14:05:04.292215] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.594 [2024-07-15 14:05:04.292507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.594 [2024-07-15 14:05:04.292541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.594 [2024-07-15 14:05:04.298297] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.594 [2024-07-15 14:05:04.298589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.594 [2024-07-15 14:05:04.298615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.594 [2024-07-15 14:05:04.304948] tcp.c:2067:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.594 [2024-07-15 14:05:04.305278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.594 [2024-07-15 14:05:04.305305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.594 [2024-07-15 14:05:04.311312] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.594 [2024-07-15 14:05:04.311624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.594 [2024-07-15 14:05:04.311651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.594 [2024-07-15 14:05:04.317945] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.594 [2024-07-15 14:05:04.318340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.594 [2024-07-15 14:05:04.318394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.594 [2024-07-15 14:05:04.324520] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.594 [2024-07-15 14:05:04.324859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.594 [2024-07-15 14:05:04.324886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.594 [2024-07-15 14:05:04.331351] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.594 [2024-07-15 14:05:04.331656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.594 [2024-07-15 14:05:04.331683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.594 [2024-07-15 14:05:04.337477] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.594 [2024-07-15 14:05:04.337804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.594 [2024-07-15 14:05:04.337832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.594 [2024-07-15 14:05:04.343681] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.594 [2024-07-15 14:05:04.344043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.594 [2024-07-15 14:05:04.344071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.594 [2024-07-15 14:05:04.349878] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.594 [2024-07-15 14:05:04.350200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.594 [2024-07-15 14:05:04.350226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.594 [2024-07-15 14:05:04.356097] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.594 [2024-07-15 14:05:04.356397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.594 [2024-07-15 14:05:04.356424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.594 [2024-07-15 14:05:04.362842] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.594 [2024-07-15 14:05:04.363159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.594 [2024-07-15 14:05:04.363186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.594 [2024-07-15 14:05:04.370513] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.594 [2024-07-15 14:05:04.370840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.594 [2024-07-15 14:05:04.370868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.594 [2024-07-15 14:05:04.377470] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.594 [2024-07-15 14:05:04.377793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.594 [2024-07-15 14:05:04.377820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.594 [2024-07-15 14:05:04.384052] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.594 [2024-07-15 14:05:04.384351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.594 [2024-07-15 14:05:04.384378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.594 [2024-07-15 14:05:04.390647] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.594 [2024-07-15 14:05:04.390983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.594 [2024-07-15 14:05:04.391011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:26:09.594 [2024-07-15 14:05:04.398057] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.594 [2024-07-15 14:05:04.398358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.594 [2024-07-15 14:05:04.398385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.594 [2024-07-15 14:05:04.405052] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.594 [2024-07-15 14:05:04.405423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.594 [2024-07-15 14:05:04.405449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.594 [2024-07-15 14:05:04.412674] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.594 [2024-07-15 14:05:04.413004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.594 [2024-07-15 14:05:04.413031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.594 [2024-07-15 14:05:04.420466] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.594 [2024-07-15 14:05:04.420791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.594 [2024-07-15 14:05:04.420818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.594 [2024-07-15 14:05:04.427567] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.594 [2024-07-15 14:05:04.427898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.594 [2024-07-15 14:05:04.427925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.853 [2024-07-15 14:05:04.434556] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.853 [2024-07-15 14:05:04.434912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.853 [2024-07-15 14:05:04.434940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.853 [2024-07-15 14:05:04.441215] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.853 [2024-07-15 14:05:04.441522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.853 [2024-07-15 14:05:04.441549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.853 [2024-07-15 14:05:04.447636] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.853 [2024-07-15 14:05:04.448028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.853 [2024-07-15 14:05:04.448069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.853 [2024-07-15 14:05:04.454238] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.853 [2024-07-15 14:05:04.454538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.853 [2024-07-15 14:05:04.454568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.853 [2024-07-15 14:05:04.460637] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.853 [2024-07-15 14:05:04.460964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.853 [2024-07-15 14:05:04.460993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.853 [2024-07-15 14:05:04.467198] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.853 [2024-07-15 14:05:04.467499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.853 [2024-07-15 14:05:04.467535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.853 [2024-07-15 14:05:04.474172] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.853 [2024-07-15 14:05:04.474471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.853 [2024-07-15 14:05:04.474497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.853 [2024-07-15 14:05:04.481793] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.853 [2024-07-15 14:05:04.482146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.853 [2024-07-15 14:05:04.482174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.853 [2024-07-15 14:05:04.488730] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.853 [2024-07-15 14:05:04.489054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.853 [2024-07-15 14:05:04.489096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.853 [2024-07-15 14:05:04.495653] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.853 [2024-07-15 14:05:04.495980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.853 [2024-07-15 14:05:04.496007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.853 [2024-07-15 14:05:04.502489] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.853 [2024-07-15 14:05:04.502830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.853 [2024-07-15 14:05:04.502857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.853 [2024-07-15 14:05:04.509010] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.853 [2024-07-15 14:05:04.509318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.853 [2024-07-15 14:05:04.509344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.853 [2024-07-15 14:05:04.515402] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.853 [2024-07-15 14:05:04.515821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.853 [2024-07-15 14:05:04.515858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.853 [2024-07-15 14:05:04.522149] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.853 [2024-07-15 14:05:04.522452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.853 [2024-07-15 14:05:04.522479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.853 [2024-07-15 14:05:04.528753] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.853 [2024-07-15 14:05:04.529085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.853 [2024-07-15 14:05:04.529111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.853 [2024-07-15 14:05:04.536474] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.853 [2024-07-15 14:05:04.536817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.853 [2024-07-15 14:05:04.536847] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.853 [2024-07-15 14:05:04.544641] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.853 [2024-07-15 14:05:04.544976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.853 [2024-07-15 14:05:04.545003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.853 [2024-07-15 14:05:04.552306] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.854 [2024-07-15 14:05:04.552606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.854 [2024-07-15 14:05:04.552633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.854 [2024-07-15 14:05:04.559796] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.854 [2024-07-15 14:05:04.560122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.854 [2024-07-15 14:05:04.560148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.854 [2024-07-15 14:05:04.566711] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.854 [2024-07-15 14:05:04.567062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.854 [2024-07-15 14:05:04.567103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.854 [2024-07-15 14:05:04.573293] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.854 [2024-07-15 14:05:04.573599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.854 [2024-07-15 14:05:04.573625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.854 [2024-07-15 14:05:04.579501] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.854 [2024-07-15 14:05:04.579824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.854 [2024-07-15 14:05:04.579852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.854 [2024-07-15 14:05:04.585922] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.854 [2024-07-15 14:05:04.586306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.854 
[2024-07-15 14:05:04.586333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.854 [2024-07-15 14:05:04.592273] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.854 [2024-07-15 14:05:04.592575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.854 [2024-07-15 14:05:04.592601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.854 [2024-07-15 14:05:04.598417] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.854 [2024-07-15 14:05:04.598724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.854 [2024-07-15 14:05:04.598772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.854 [2024-07-15 14:05:04.604651] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.854 [2024-07-15 14:05:04.604973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.854 [2024-07-15 14:05:04.604999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.854 [2024-07-15 14:05:04.610817] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.854 [2024-07-15 14:05:04.611138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.854 [2024-07-15 14:05:04.611164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.854 [2024-07-15 14:05:04.617143] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.854 [2024-07-15 14:05:04.617444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.854 [2024-07-15 14:05:04.617469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.854 [2024-07-15 14:05:04.624991] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.854 [2024-07-15 14:05:04.625325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.854 [2024-07-15 14:05:04.625352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.854 [2024-07-15 14:05:04.633247] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.854 [2024-07-15 14:05:04.633540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:09.854 [2024-07-15 14:05:04.633568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.854 [2024-07-15 14:05:04.640797] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.854 [2024-07-15 14:05:04.641136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.854 [2024-07-15 14:05:04.641163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.854 [2024-07-15 14:05:04.648188] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.854 [2024-07-15 14:05:04.648485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.854 [2024-07-15 14:05:04.648518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.854 [2024-07-15 14:05:04.655530] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.854 [2024-07-15 14:05:04.655877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.854 [2024-07-15 14:05:04.655906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.854 [2024-07-15 14:05:04.663464] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.854 [2024-07-15 14:05:04.663779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.854 [2024-07-15 14:05:04.663808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.854 [2024-07-15 14:05:04.671252] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.854 [2024-07-15 14:05:04.671609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.854 [2024-07-15 14:05:04.671637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.854 [2024-07-15 14:05:04.678999] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.854 [2024-07-15 14:05:04.679316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.854 [2024-07-15 14:05:04.679342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.854 [2024-07-15 14:05:04.686540] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:09.854 [2024-07-15 14:05:04.686863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.854 [2024-07-15 14:05:04.686892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.113 [2024-07-15 14:05:04.694329] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.113 [2024-07-15 14:05:04.694648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.113 [2024-07-15 14:05:04.694676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.113 [2024-07-15 14:05:04.702273] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.113 [2024-07-15 14:05:04.702567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.113 [2024-07-15 14:05:04.702595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.113 [2024-07-15 14:05:04.709561] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.113 [2024-07-15 14:05:04.709886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.113 [2024-07-15 14:05:04.709917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.113 [2024-07-15 14:05:04.717018] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.113 [2024-07-15 14:05:04.717332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.113 [2024-07-15 14:05:04.717358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.113 [2024-07-15 14:05:04.724549] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.113 [2024-07-15 14:05:04.724872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.113 [2024-07-15 14:05:04.724899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.113 [2024-07-15 14:05:04.731186] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.113 [2024-07-15 14:05:04.731480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.113 [2024-07-15 14:05:04.731506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.113 [2024-07-15 14:05:04.737160] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.113 [2024-07-15 14:05:04.737455] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.113 [2024-07-15 14:05:04.737482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.113 [2024-07-15 14:05:04.743490] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.113 [2024-07-15 14:05:04.743804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.113 [2024-07-15 14:05:04.743830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.113 [2024-07-15 14:05:04.749484] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.113 [2024-07-15 14:05:04.749811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.113 [2024-07-15 14:05:04.749837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.113 [2024-07-15 14:05:04.756050] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.113 [2024-07-15 14:05:04.756371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.113 [2024-07-15 14:05:04.756397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.113 [2024-07-15 14:05:04.763310] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.114 [2024-07-15 14:05:04.763606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.114 [2024-07-15 14:05:04.763632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.114 [2024-07-15 14:05:04.771495] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.114 [2024-07-15 14:05:04.771832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.114 [2024-07-15 14:05:04.771858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.114 [2024-07-15 14:05:04.780253] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.114 [2024-07-15 14:05:04.780557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.114 [2024-07-15 14:05:04.780583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.114 [2024-07-15 14:05:04.788912] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.114 [2024-07-15 14:05:04.789233] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.114 [2024-07-15 14:05:04.789260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.114 [2024-07-15 14:05:04.795712] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.114 [2024-07-15 14:05:04.796044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.114 [2024-07-15 14:05:04.796072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.114 [2024-07-15 14:05:04.801525] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.114 [2024-07-15 14:05:04.801840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.114 [2024-07-15 14:05:04.801867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.114 [2024-07-15 14:05:04.807119] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.114 [2024-07-15 14:05:04.807402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.114 [2024-07-15 14:05:04.807429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.114 [2024-07-15 14:05:04.812986] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.114 [2024-07-15 14:05:04.813286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.114 [2024-07-15 14:05:04.813311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.114 [2024-07-15 14:05:04.820151] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.114 [2024-07-15 14:05:04.820440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.114 [2024-07-15 14:05:04.820466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.114 [2024-07-15 14:05:04.826956] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.114 [2024-07-15 14:05:04.827299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.114 [2024-07-15 14:05:04.827325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.114 [2024-07-15 14:05:04.832987] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.114 
[2024-07-15 14:05:04.833103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.114 [2024-07-15 14:05:04.833134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.114 [2024-07-15 14:05:04.839315] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.114 [2024-07-15 14:05:04.839680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.114 [2024-07-15 14:05:04.839706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.114 [2024-07-15 14:05:04.846189] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.114 [2024-07-15 14:05:04.846493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.114 [2024-07-15 14:05:04.846519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.114 [2024-07-15 14:05:04.852355] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.114 [2024-07-15 14:05:04.852658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.114 [2024-07-15 14:05:04.852684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.114 [2024-07-15 14:05:04.859003] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.114 [2024-07-15 14:05:04.859281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.114 [2024-07-15 14:05:04.859306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.114 [2024-07-15 14:05:04.865142] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.114 [2024-07-15 14:05:04.865430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.114 [2024-07-15 14:05:04.865455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.114 [2024-07-15 14:05:04.871352] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.114 [2024-07-15 14:05:04.871647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.114 [2024-07-15 14:05:04.871673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.114 [2024-07-15 14:05:04.877689] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with 
pdu=0x2000190fef90 00:26:10.114 [2024-07-15 14:05:04.878006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.114 [2024-07-15 14:05:04.878032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.114 [2024-07-15 14:05:04.884029] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.114 [2024-07-15 14:05:04.884322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.114 [2024-07-15 14:05:04.884349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.114 [2024-07-15 14:05:04.890314] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.114 [2024-07-15 14:05:04.890604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.114 [2024-07-15 14:05:04.890630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.114 [2024-07-15 14:05:04.896582] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.114 [2024-07-15 14:05:04.896902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.114 [2024-07-15 14:05:04.896929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.114 [2024-07-15 14:05:04.903014] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.114 [2024-07-15 14:05:04.903306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.114 [2024-07-15 14:05:04.903331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.114 [2024-07-15 14:05:04.909262] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.114 [2024-07-15 14:05:04.909553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.114 [2024-07-15 14:05:04.909579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.114 [2024-07-15 14:05:04.915705] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.114 [2024-07-15 14:05:04.916012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.114 [2024-07-15 14:05:04.916054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.114 [2024-07-15 14:05:04.923164] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.114 [2024-07-15 14:05:04.923468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.114 [2024-07-15 14:05:04.923493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.114 [2024-07-15 14:05:04.930497] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.114 [2024-07-15 14:05:04.930814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.114 [2024-07-15 14:05:04.930840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.114 [2024-07-15 14:05:04.937105] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.114 [2024-07-15 14:05:04.937399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.114 [2024-07-15 14:05:04.937426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.114 [2024-07-15 14:05:04.943439] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.114 [2024-07-15 14:05:04.943756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.114 [2024-07-15 14:05:04.943788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.114 [2024-07-15 14:05:04.950207] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.114 [2024-07-15 14:05:04.950528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.114 [2024-07-15 14:05:04.950554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.374 [2024-07-15 14:05:04.956959] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.374 [2024-07-15 14:05:04.957341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.374 [2024-07-15 14:05:04.957368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.374 [2024-07-15 14:05:04.963761] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.374 [2024-07-15 14:05:04.964080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.374 [2024-07-15 14:05:04.964106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.374 [2024-07-15 14:05:04.970453] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.374 [2024-07-15 14:05:04.970767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.374 [2024-07-15 14:05:04.970794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.374 [2024-07-15 14:05:04.977827] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.374 [2024-07-15 14:05:04.978140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.374 [2024-07-15 14:05:04.978165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.374 [2024-07-15 14:05:04.985240] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.374 [2024-07-15 14:05:04.985357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.374 [2024-07-15 14:05:04.985383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.374 [2024-07-15 14:05:04.992799] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.374 [2024-07-15 14:05:04.993098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.374 [2024-07-15 14:05:04.993124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.374 [2024-07-15 14:05:05.000340] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.374 [2024-07-15 14:05:05.000621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.374 [2024-07-15 14:05:05.000647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.374 [2024-07-15 14:05:05.007090] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.374 [2024-07-15 14:05:05.007373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.374 [2024-07-15 14:05:05.007405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.374 [2024-07-15 14:05:05.013181] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.374 [2024-07-15 14:05:05.013460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.374 [2024-07-15 14:05:05.013486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:26:10.374 [2024-07-15 14:05:05.019508] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.374 [2024-07-15 14:05:05.019811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.374 [2024-07-15 14:05:05.019839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.374 [2024-07-15 14:05:05.025702] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.374 [2024-07-15 14:05:05.025991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.374 [2024-07-15 14:05:05.026017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.374 [2024-07-15 14:05:05.031601] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.374 [2024-07-15 14:05:05.031889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.374 [2024-07-15 14:05:05.031915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.374 [2024-07-15 14:05:05.037832] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.374 [2024-07-15 14:05:05.038122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.374 [2024-07-15 14:05:05.038148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.374 [2024-07-15 14:05:05.043992] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.374 [2024-07-15 14:05:05.044276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.374 [2024-07-15 14:05:05.044301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.374 [2024-07-15 14:05:05.052021] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.374 [2024-07-15 14:05:05.052333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.374 [2024-07-15 14:05:05.052359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.374 [2024-07-15 14:05:05.059012] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.374 [2024-07-15 14:05:05.059282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.374 [2024-07-15 14:05:05.059308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.374 [2024-07-15 14:05:05.066102] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.374 [2024-07-15 14:05:05.066367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.374 [2024-07-15 14:05:05.066392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.374 [2024-07-15 14:05:05.073137] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.374 [2024-07-15 14:05:05.073401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.374 [2024-07-15 14:05:05.073427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.374 [2024-07-15 14:05:05.080669] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.374 [2024-07-15 14:05:05.080955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.374 [2024-07-15 14:05:05.080982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.374 [2024-07-15 14:05:05.087339] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.374 [2024-07-15 14:05:05.087605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.374 [2024-07-15 14:05:05.087631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.374 [2024-07-15 14:05:05.093364] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.374 [2024-07-15 14:05:05.093628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.374 [2024-07-15 14:05:05.093653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.374 [2024-07-15 14:05:05.099349] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.374 [2024-07-15 14:05:05.099614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.374 [2024-07-15 14:05:05.099640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.374 [2024-07-15 14:05:05.105059] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.374 [2024-07-15 14:05:05.105322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.374 [2024-07-15 14:05:05.105348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.374 [2024-07-15 14:05:05.110907] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.374 [2024-07-15 14:05:05.111187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.374 [2024-07-15 14:05:05.111213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.374 [2024-07-15 14:05:05.117244] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.374 [2024-07-15 14:05:05.117616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.374 [2024-07-15 14:05:05.117648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.374 [2024-07-15 14:05:05.125379] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.374 [2024-07-15 14:05:05.125709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.374 [2024-07-15 14:05:05.125759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.374 [2024-07-15 14:05:05.131979] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.374 [2024-07-15 14:05:05.132260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.374 [2024-07-15 14:05:05.132285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.375 [2024-07-15 14:05:05.139213] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.375 [2024-07-15 14:05:05.139476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.375 [2024-07-15 14:05:05.139502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.375 [2024-07-15 14:05:05.146865] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.375 [2024-07-15 14:05:05.147151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.375 [2024-07-15 14:05:05.147177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.375 [2024-07-15 14:05:05.153176] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.375 [2024-07-15 14:05:05.153441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.375 [2024-07-15 14:05:05.153467] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.375 [2024-07-15 14:05:05.159094] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.375 [2024-07-15 14:05:05.159360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.375 [2024-07-15 14:05:05.159386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.375 [2024-07-15 14:05:05.166455] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.375 [2024-07-15 14:05:05.166831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.375 [2024-07-15 14:05:05.166872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.375 [2024-07-15 14:05:05.173255] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.375 [2024-07-15 14:05:05.173517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.375 [2024-07-15 14:05:05.173544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.375 [2024-07-15 14:05:05.179550] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.375 [2024-07-15 14:05:05.179847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.375 [2024-07-15 14:05:05.179874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.375 [2024-07-15 14:05:05.185461] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.375 [2024-07-15 14:05:05.185749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.375 [2024-07-15 14:05:05.185777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.375 [2024-07-15 14:05:05.192559] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.375 [2024-07-15 14:05:05.192967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.375 [2024-07-15 14:05:05.192994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.375 [2024-07-15 14:05:05.199562] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.375 [2024-07-15 14:05:05.199849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.375 
[2024-07-15 14:05:05.199876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.375 [2024-07-15 14:05:05.206272] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.375 [2024-07-15 14:05:05.206534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.375 [2024-07-15 14:05:05.206561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.375 [2024-07-15 14:05:05.213059] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.375 [2024-07-15 14:05:05.213386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.375 [2024-07-15 14:05:05.213413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.634 [2024-07-15 14:05:05.220436] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.634 [2024-07-15 14:05:05.220701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.634 [2024-07-15 14:05:05.220750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.634 [2024-07-15 14:05:05.226606] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.634 [2024-07-15 14:05:05.226900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.634 [2024-07-15 14:05:05.226927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.634 [2024-07-15 14:05:05.232799] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.634 [2024-07-15 14:05:05.233082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.634 [2024-07-15 14:05:05.233108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.634 [2024-07-15 14:05:05.239383] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.634 [2024-07-15 14:05:05.239650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.634 [2024-07-15 14:05:05.239676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.634 [2024-07-15 14:05:05.245730] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.634 [2024-07-15 14:05:05.246022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:10.634 [2024-07-15 14:05:05.246048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.634 [2024-07-15 14:05:05.251955] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.634 [2024-07-15 14:05:05.252236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.634 [2024-07-15 14:05:05.252261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.634 [2024-07-15 14:05:05.258014] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.634 [2024-07-15 14:05:05.258282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.634 [2024-07-15 14:05:05.258308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.634 [2024-07-15 14:05:05.264013] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.634 [2024-07-15 14:05:05.264277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.634 [2024-07-15 14:05:05.264303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.634 [2024-07-15 14:05:05.269612] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.634 [2024-07-15 14:05:05.269907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.634 [2024-07-15 14:05:05.269933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.634 [2024-07-15 14:05:05.275643] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.634 [2024-07-15 14:05:05.275933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.634 [2024-07-15 14:05:05.275959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.634 [2024-07-15 14:05:05.282963] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.634 [2024-07-15 14:05:05.283242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.634 [2024-07-15 14:05:05.283267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.634 [2024-07-15 14:05:05.289179] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.634 [2024-07-15 14:05:05.289442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.634 [2024-07-15 14:05:05.289472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.634 [2024-07-15 14:05:05.296252] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.634 [2024-07-15 14:05:05.296532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.634 [2024-07-15 14:05:05.296560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.634 [2024-07-15 14:05:05.304051] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.634 [2024-07-15 14:05:05.304316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.634 [2024-07-15 14:05:05.304341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.634 [2024-07-15 14:05:05.311275] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.634 [2024-07-15 14:05:05.311539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.634 [2024-07-15 14:05:05.311565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.634 [2024-07-15 14:05:05.318491] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.634 [2024-07-15 14:05:05.318781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.634 [2024-07-15 14:05:05.318807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.634 [2024-07-15 14:05:05.326055] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.634 [2024-07-15 14:05:05.326324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.634 [2024-07-15 14:05:05.326349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.634 [2024-07-15 14:05:05.333536] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.634 [2024-07-15 14:05:05.333827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.634 [2024-07-15 14:05:05.333855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.634 [2024-07-15 14:05:05.341191] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.634 [2024-07-15 14:05:05.341529] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.634 [2024-07-15 14:05:05.341555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.634 [2024-07-15 14:05:05.348291] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.635 [2024-07-15 14:05:05.348548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.635 [2024-07-15 14:05:05.348574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.635 [2024-07-15 14:05:05.354966] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.635 [2024-07-15 14:05:05.355249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.635 [2024-07-15 14:05:05.355275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.635 [2024-07-15 14:05:05.360839] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.635 [2024-07-15 14:05:05.361116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.635 [2024-07-15 14:05:05.361142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.635 [2024-07-15 14:05:05.366212] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.635 [2024-07-15 14:05:05.366467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.635 [2024-07-15 14:05:05.366493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.635 [2024-07-15 14:05:05.371732] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.635 [2024-07-15 14:05:05.372006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.635 [2024-07-15 14:05:05.372047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.635 [2024-07-15 14:05:05.377249] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.635 [2024-07-15 14:05:05.377503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.635 [2024-07-15 14:05:05.377530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.635 [2024-07-15 14:05:05.382852] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.635 [2024-07-15 14:05:05.383137] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.635 [2024-07-15 14:05:05.383163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.635 [2024-07-15 14:05:05.389044] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.635 [2024-07-15 14:05:05.389314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.635 [2024-07-15 14:05:05.389341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.635 [2024-07-15 14:05:05.394986] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.635 [2024-07-15 14:05:05.395270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.635 [2024-07-15 14:05:05.395297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.635 [2024-07-15 14:05:05.401489] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.635 [2024-07-15 14:05:05.401784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.635 [2024-07-15 14:05:05.401811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.635 [2024-07-15 14:05:05.408464] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.635 [2024-07-15 14:05:05.408763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.635 [2024-07-15 14:05:05.408791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.635 [2024-07-15 14:05:05.414595] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.635 [2024-07-15 14:05:05.414924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.635 [2024-07-15 14:05:05.414953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.635 [2024-07-15 14:05:05.420523] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.635 [2024-07-15 14:05:05.420810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.635 [2024-07-15 14:05:05.420836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.635 [2024-07-15 14:05:05.426389] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.635 
[2024-07-15 14:05:05.426653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.635 [2024-07-15 14:05:05.426679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.635 [2024-07-15 14:05:05.432019] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.635 [2024-07-15 14:05:05.432317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.635 [2024-07-15 14:05:05.432343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.635 [2024-07-15 14:05:05.437361] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.635 [2024-07-15 14:05:05.437618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.635 [2024-07-15 14:05:05.437644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.635 [2024-07-15 14:05:05.442650] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.635 [2024-07-15 14:05:05.442932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.635 [2024-07-15 14:05:05.442958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.635 [2024-07-15 14:05:05.447986] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.635 [2024-07-15 14:05:05.448276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.635 [2024-07-15 14:05:05.448301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.635 [2024-07-15 14:05:05.453242] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.635 [2024-07-15 14:05:05.453497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.635 [2024-07-15 14:05:05.453528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.635 [2024-07-15 14:05:05.458507] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.635 [2024-07-15 14:05:05.458787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.635 [2024-07-15 14:05:05.458814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.635 [2024-07-15 14:05:05.463790] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with 
pdu=0x2000190fef90 00:26:10.635 [2024-07-15 14:05:05.464069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.635 [2024-07-15 14:05:05.464095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.635 [2024-07-15 14:05:05.469226] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.635 [2024-07-15 14:05:05.469499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.635 [2024-07-15 14:05:05.469525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.905 [2024-07-15 14:05:05.475595] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.905 [2024-07-15 14:05:05.475912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.905 [2024-07-15 14:05:05.475941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.905 [2024-07-15 14:05:05.481590] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.905 [2024-07-15 14:05:05.481885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.905 [2024-07-15 14:05:05.481913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.905 [2024-07-15 14:05:05.487302] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.905 [2024-07-15 14:05:05.487565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.905 [2024-07-15 14:05:05.487591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.905 [2024-07-15 14:05:05.492735] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.905 [2024-07-15 14:05:05.493007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.905 [2024-07-15 14:05:05.493033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.905 [2024-07-15 14:05:05.498807] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.905 [2024-07-15 14:05:05.499107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.905 [2024-07-15 14:05:05.499134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.905 [2024-07-15 14:05:05.506224] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.905 [2024-07-15 14:05:05.506490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.905 [2024-07-15 14:05:05.506515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.906 [2024-07-15 14:05:05.511940] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.906 [2024-07-15 14:05:05.512231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.906 [2024-07-15 14:05:05.512257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.906 [2024-07-15 14:05:05.517701] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.906 [2024-07-15 14:05:05.517999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.906 [2024-07-15 14:05:05.518025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.906 [2024-07-15 14:05:05.523602] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.906 [2024-07-15 14:05:05.523891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.906 [2024-07-15 14:05:05.523918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.906 [2024-07-15 14:05:05.529373] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.906 [2024-07-15 14:05:05.529636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.906 [2024-07-15 14:05:05.529662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.906 [2024-07-15 14:05:05.535835] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.906 [2024-07-15 14:05:05.536119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.906 [2024-07-15 14:05:05.536146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.906 [2024-07-15 14:05:05.542834] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.906 [2024-07-15 14:05:05.543117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.906 [2024-07-15 14:05:05.543142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.906 [2024-07-15 14:05:05.548780] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.906 [2024-07-15 14:05:05.549067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.906 [2024-07-15 14:05:05.549109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.906 [2024-07-15 14:05:05.555687] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.906 [2024-07-15 14:05:05.555995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.906 [2024-07-15 14:05:05.556037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.906 [2024-07-15 14:05:05.562143] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.906 [2024-07-15 14:05:05.562406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.906 [2024-07-15 14:05:05.562432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.906 [2024-07-15 14:05:05.569070] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.906 [2024-07-15 14:05:05.569341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.906 [2024-07-15 14:05:05.569367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.906 [2024-07-15 14:05:05.576438] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.906 [2024-07-15 14:05:05.576703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.906 [2024-07-15 14:05:05.576752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.906 [2024-07-15 14:05:05.583109] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.906 [2024-07-15 14:05:05.583374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.906 [2024-07-15 14:05:05.583400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.906 [2024-07-15 14:05:05.589192] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.906 [2024-07-15 14:05:05.589457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.906 [2024-07-15 14:05:05.589483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:26:10.906 [2024-07-15 14:05:05.595151] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.906 [2024-07-15 14:05:05.595417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.906 [2024-07-15 14:05:05.595443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.906 [2024-07-15 14:05:05.601308] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.906 [2024-07-15 14:05:05.601572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.906 [2024-07-15 14:05:05.601597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.906 [2024-07-15 14:05:05.607066] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.906 [2024-07-15 14:05:05.607330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.906 [2024-07-15 14:05:05.607356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.906 [2024-07-15 14:05:05.614226] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.906 [2024-07-15 14:05:05.614538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.906 [2024-07-15 14:05:05.614573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.906 [2024-07-15 14:05:05.621474] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.906 [2024-07-15 14:05:05.621771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.906 [2024-07-15 14:05:05.621798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.906 [2024-07-15 14:05:05.627854] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.906 [2024-07-15 14:05:05.628140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.906 [2024-07-15 14:05:05.628166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.906 [2024-07-15 14:05:05.633793] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.906 [2024-07-15 14:05:05.634079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.906 [2024-07-15 14:05:05.634106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.906 [2024-07-15 14:05:05.640270] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.906 [2024-07-15 14:05:05.640533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.906 [2024-07-15 14:05:05.640558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.906 [2024-07-15 14:05:05.647322] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.906 [2024-07-15 14:05:05.647582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.906 [2024-07-15 14:05:05.647609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.906 [2024-07-15 14:05:05.654162] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.906 [2024-07-15 14:05:05.654422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.906 [2024-07-15 14:05:05.654447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.906 [2024-07-15 14:05:05.661183] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.906 [2024-07-15 14:05:05.661454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.906 [2024-07-15 14:05:05.661480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.906 [2024-07-15 14:05:05.668222] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.906 [2024-07-15 14:05:05.668490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.906 [2024-07-15 14:05:05.668516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.906 [2024-07-15 14:05:05.674630] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.906 [2024-07-15 14:05:05.674922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.906 [2024-07-15 14:05:05.674949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.907 [2024-07-15 14:05:05.680397] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.907 [2024-07-15 14:05:05.680656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.907 [2024-07-15 14:05:05.680682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.907 [2024-07-15 14:05:05.686344] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.907 [2024-07-15 14:05:05.686655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.907 [2024-07-15 14:05:05.686681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.907 [2024-07-15 14:05:05.692340] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.907 [2024-07-15 14:05:05.692603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.907 [2024-07-15 14:05:05.692629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.907 [2024-07-15 14:05:05.699128] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.907 [2024-07-15 14:05:05.699390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.907 [2024-07-15 14:05:05.699416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.907 [2024-07-15 14:05:05.705669] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.907 [2024-07-15 14:05:05.705955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.907 [2024-07-15 14:05:05.705982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.907 [2024-07-15 14:05:05.711847] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.907 [2024-07-15 14:05:05.712130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.907 [2024-07-15 14:05:05.712156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.907 [2024-07-15 14:05:05.718044] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.907 [2024-07-15 14:05:05.718326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.907 [2024-07-15 14:05:05.718352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.907 [2024-07-15 14:05:05.723913] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.907 [2024-07-15 14:05:05.724209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.907 [2024-07-15 14:05:05.724235] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.907 [2024-07-15 14:05:05.729629] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.907 [2024-07-15 14:05:05.729919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.907 [2024-07-15 14:05:05.729948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.907 [2024-07-15 14:05:05.735219] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.907 [2024-07-15 14:05:05.735476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.907 [2024-07-15 14:05:05.735503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.907 [2024-07-15 14:05:05.742147] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:10.907 [2024-07-15 14:05:05.742413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.907 [2024-07-15 14:05:05.742439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.165 [2024-07-15 14:05:05.748060] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:11.165 [2024-07-15 14:05:05.748331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.165 [2024-07-15 14:05:05.748358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.165 [2024-07-15 14:05:05.753756] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:11.165 [2024-07-15 14:05:05.754047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.165 [2024-07-15 14:05:05.754074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.165 [2024-07-15 14:05:05.759289] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:11.165 [2024-07-15 14:05:05.759548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.165 [2024-07-15 14:05:05.759574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.165 [2024-07-15 14:05:05.765883] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf162e0) with pdu=0x2000190fef90 00:26:11.165 [2024-07-15 14:05:05.766156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.165 [2024-07-15 
14:05:05.766182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.165 00:26:11.165 Latency(us) 00:26:11.165 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:11.165 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:11.165 nvme0n1 : 2.00 4515.18 564.40 0.00 0.00 3535.96 1844.72 11116.85 00:26:11.165 =================================================================================================================== 00:26:11.165 Total : 4515.18 564.40 0.00 0.00 3535.96 1844.72 11116.85 00:26:11.165 0 00:26:11.165 14:05:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:11.165 14:05:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:11.165 14:05:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:11.165 14:05:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:11.165 | .driver_specific 00:26:11.165 | .nvme_error 00:26:11.165 | .status_code 00:26:11.165 | .command_transient_transport_error' 00:26:11.422 14:05:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 291 > 0 )) 00:26:11.422 14:05:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3855168 00:26:11.422 14:05:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3855168 ']' 00:26:11.422 14:05:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3855168 00:26:11.422 14:05:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:26:11.423 14:05:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:11.423 14:05:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3855168 00:26:11.423 14:05:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:11.423 14:05:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:11.423 14:05:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3855168' 00:26:11.423 killing process with pid 3855168 00:26:11.423 14:05:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3855168 00:26:11.423 Received shutdown signal, test time was about 2.000000 seconds 00:26:11.423 00:26:11.423 Latency(us) 00:26:11.423 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:11.423 =================================================================================================================== 00:26:11.423 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:11.423 14:05:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3855168 00:26:11.680 14:05:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3853794 00:26:11.680 14:05:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3853794 ']' 00:26:11.680 14:05:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3853794 00:26:11.680 14:05:06 
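The get_transient_errcount step above asks bdevperf, over its bperf RPC socket, for the bdev I/O statistics and pulls the transient transport error counter out of the JSON with jq. A minimal sketch of that query, reusing the socket path, bdev name, and jq filter exactly as they appear in the trace:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Count of COMMAND TRANSIENT TRANSPORT ERROR completions recorded by the bdev layer
errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
(( errcount > 0 ))   # the digest_error test only passes if errors were actually observed (291 here)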
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:26:11.680 14:05:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:11.680 14:05:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3853794 00:26:11.680 14:05:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:11.680 14:05:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:11.680 14:05:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3853794' 00:26:11.680 killing process with pid 3853794 00:26:11.680 14:05:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3853794 00:26:11.680 14:05:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3853794 00:26:11.939 00:26:11.939 real 0m15.398s 00:26:11.939 user 0m29.717s 00:26:11.939 sys 0m5.063s 00:26:11.939 14:05:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:11.939 14:05:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:11.939 ************************************ 00:26:11.939 END TEST nvmf_digest_error 00:26:11.939 ************************************ 00:26:11.939 14:05:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:26:11.939 14:05:06 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:26:11.939 14:05:06 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:26:11.939 14:05:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:11.939 14:05:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:26:11.939 14:05:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:11.939 14:05:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:26:11.939 14:05:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:11.939 14:05:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:11.939 rmmod nvme_tcp 00:26:11.939 rmmod nvme_fabrics 00:26:11.939 rmmod nvme_keyring 00:26:11.939 14:05:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:11.939 14:05:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:26:11.939 14:05:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:26:11.939 14:05:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 3853794 ']' 00:26:11.939 14:05:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 3853794 00:26:11.939 14:05:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 3853794 ']' 00:26:11.939 14:05:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 3853794 00:26:11.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3853794) - No such process 00:26:11.939 14:05:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 3853794 is not found' 00:26:11.939 Process with pid 3853794 is not found 00:26:11.939 14:05:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:11.939 14:05:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:11.939 14:05:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:11.939 14:05:06 
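After the error-injection run, nvmftestfini unloads the kernel NVMe/TCP initiator modules and stops the target process; a pid that has already exited, as with 3853794 above, is only reported as missing rather than treated as a failure. A simplified sketch of that teardown (the real helpers live in test/nvmf/common.sh and test/common/autotest_common.sh):

sync
set +e
modprobe -v -r nvme-tcp        # the rmmod output above shows nvme_tcp, nvme_fabrics and nvme_keyring going away
modprobe -v -r nvme-fabrics
set -e
if [ -n "$nvmfpid" ] && kill -0 "$nvmfpid" 2>/dev/null; then
    kill "$nvmfpid"
else
    echo "Process with pid $nvmfpid is not found"
fi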
nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:11.939 14:05:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:11.939 14:05:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:11.939 14:05:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:11.939 14:05:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:13.843 14:05:08 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:14.102 00:26:14.102 real 0m35.565s 00:26:14.102 user 1m0.973s 00:26:14.102 sys 0m11.737s 00:26:14.102 14:05:08 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:14.102 14:05:08 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:14.102 ************************************ 00:26:14.102 END TEST nvmf_digest 00:26:14.102 ************************************ 00:26:14.102 14:05:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:14.102 14:05:08 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:26:14.102 14:05:08 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:26:14.102 14:05:08 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:26:14.102 14:05:08 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:14.102 14:05:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:14.102 14:05:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:14.102 14:05:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:14.102 ************************************ 00:26:14.102 START TEST nvmf_bdevperf 00:26:14.102 ************************************ 00:26:14.102 14:05:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:14.102 * Looking for test storage... 
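Each suite is launched through the run_test wrapper, which prints the START/END banners seen above, times the wrapped script, and propagates its exit status. A rough sketch of that wrapper, assuming a simplified form of the helper from autotest_common.sh (the real one also validates its arguments and drives the xtrace markers):

run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}
# e.g. run_test nvmf_bdevperf ./test/nvmf/host/bdevperf.sh --transport=tcp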
00:26:14.102 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:14.102 14:05:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:14.102 14:05:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:26:14.102 14:05:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:14.102 14:05:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:14.102 14:05:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:14.102 14:05:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:14.102 14:05:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:14.102 14:05:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:14.102 14:05:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:14.102 14:05:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:14.102 14:05:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:14.102 14:05:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:14.102 14:05:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:14.102 14:05:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:26:14.102 14:05:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:14.102 14:05:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:14.102 14:05:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:14.102 14:05:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:14.102 14:05:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:14.102 14:05:08 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:14.102 14:05:08 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:14.102 14:05:08 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:14.102 14:05:08 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.102 14:05:08 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.102 14:05:08 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.102 14:05:08 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:26:14.102 14:05:08 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.102 14:05:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:26:14.102 14:05:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:14.103 14:05:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:14.103 14:05:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:14.103 14:05:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:14.103 14:05:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:14.103 14:05:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:14.103 14:05:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:14.103 14:05:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:14.103 14:05:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:14.103 14:05:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:14.103 14:05:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:26:14.103 14:05:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:14.103 14:05:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:14.103 14:05:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:14.103 14:05:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:14.103 14:05:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:14.103 14:05:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:14.103 14:05:08 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:14.103 14:05:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:14.103 14:05:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:14.103 14:05:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:14.103 14:05:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:26:14.103 14:05:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:16.003 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:16.003 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:26:16.003 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:16.003 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:16.003 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:16.003 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:16.003 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:16.003 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:26:16.003 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:16.003 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:26:16.003 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:26:16.003 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:26:16.003 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:26:16.003 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:26:16.003 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:26:16.003 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:16.003 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:16.003 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:16.003 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:16.003 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:16.003 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:16.003 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:16.003 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:16.003 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:16.003 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:16.003 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:16.003 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:16.003 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:16.003 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:16.283 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:16.284 14:05:10 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:16.284 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:16.284 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:16.284 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:26:16.284 Found 0000:84:00.0 (0x8086 - 0x159b) 00:26:16.284 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:16.284 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:16.284 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:16.284 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:16.284 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:16.285 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:16.285 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:26:16.285 Found 0000:84:00.1 (0x8086 - 0x159b) 00:26:16.285 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:16.285 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:16.285 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:16.285 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:16.285 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:16.285 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:16.285 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:16.285 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:16.285 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:16.285 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:16.285 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:16.285 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:16.285 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:16.288 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:16.288 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:16.288 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:26:16.288 Found net devices under 0000:84:00.0: cvl_0_0 00:26:16.288 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:16.288 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:16.288 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:16.288 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:16.288 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:16.288 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:16.288 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:16.288 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:16.288 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:26:16.288 Found net devices under 0000:84:00.1: cvl_0_1 00:26:16.288 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:16.288 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:16.288 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:26:16.288 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:16.288 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:16.288 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:16.288 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:16.288 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:16.288 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:16.288 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:16.288 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:16.288 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:16.288 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:16.288 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:16.288 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:16.288 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:16.288 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:16.288 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:16.288 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:16.288 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:16.288 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:16.288 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:16.288 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:16.288 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:16.288 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:16.288 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:16.288 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:16.288 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:26:16.288 00:26:16.288 --- 10.0.0.2 ping statistics --- 00:26:16.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:16.288 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:26:16.288 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:16.288 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:16.288 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:26:16.288 00:26:16.288 --- 10.0.0.1 ping statistics --- 00:26:16.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:16.288 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:26:16.288 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:16.288 14:05:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:26:16.288 14:05:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:16.288 14:05:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:16.288 14:05:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:16.288 14:05:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:16.288 14:05:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:16.288 14:05:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:16.288 14:05:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:16.288 14:05:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:26:16.288 14:05:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:16.288 14:05:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:16.288 14:05:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:16.288 14:05:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:16.288 14:05:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3857533 00:26:16.288 14:05:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:16.288 14:05:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3857533 00:26:16.288 14:05:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 3857533 ']' 00:26:16.288 14:05:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:16.288 14:05:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:16.288 14:05:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:16.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:16.288 14:05:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:16.288 14:05:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:16.288 [2024-07-15 14:05:11.075385] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:26:16.288 [2024-07-15 14:05:11.075470] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:16.288 EAL: No free 2048 kB hugepages reported on node 1 00:26:16.546 [2024-07-15 14:05:11.140511] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:16.546 [2024-07-15 14:05:11.249818] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
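The ip/iptables calls above (nvmf/common.sh@244-268) build the loopback topology the rest of this run depends on: the target-side port cvl_0_0 is moved into a private namespace cvl_0_0_ns_spdk and addressed as 10.0.0.2/24, the initiator-side port cvl_0_1 stays in the default namespace as 10.0.0.1/24, TCP port 4420 is opened on the initiator interface, and both directions are ping-verified. A minimal standalone sketch of the same plumbing, assuming two cabled ports named cvl_0_0 and cvl_0_1 as in this log (on another host the interface names will differ):

# target-side port lives in its own network namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# initiator-side port stays in the default namespace
ip addr add 10.0.0.1/24 dev cvl_0_1
ip link set cvl_0_1 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# verify both directions before starting the target, as the harness does
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1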
00:26:16.546 [2024-07-15 14:05:11.249870] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:16.546 [2024-07-15 14:05:11.249898] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:16.546 [2024-07-15 14:05:11.249909] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:16.546 [2024-07-15 14:05:11.249919] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:16.546 [2024-07-15 14:05:11.250002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:16.546 [2024-07-15 14:05:11.250065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:16.546 [2024-07-15 14:05:11.250068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:17.478 14:05:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:17.478 14:05:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:26:17.478 14:05:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:17.478 14:05:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:17.478 14:05:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:17.478 14:05:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:17.478 14:05:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:17.478 14:05:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.478 14:05:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:17.478 [2024-07-15 14:05:12.029071] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:17.478 14:05:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.478 14:05:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:17.478 14:05:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.478 14:05:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:17.478 Malloc0 00:26:17.478 14:05:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.478 14:05:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:17.478 14:05:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.478 14:05:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:17.478 14:05:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.478 14:05:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:17.478 14:05:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.478 14:05:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:17.478 14:05:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.478 14:05:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:17.478 14:05:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
00:26:17.478 14:05:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:17.478 [2024-07-15 14:05:12.091397] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:17.478 14:05:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.478 14:05:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:26:17.478 14:05:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:26:17.478 14:05:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:26:17.478 14:05:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:26:17.478 14:05:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:17.478 14:05:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:17.478 { 00:26:17.478 "params": { 00:26:17.478 "name": "Nvme$subsystem", 00:26:17.478 "trtype": "$TEST_TRANSPORT", 00:26:17.478 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:17.478 "adrfam": "ipv4", 00:26:17.478 "trsvcid": "$NVMF_PORT", 00:26:17.478 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:17.478 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:17.478 "hdgst": ${hdgst:-false}, 00:26:17.478 "ddgst": ${ddgst:-false} 00:26:17.478 }, 00:26:17.478 "method": "bdev_nvme_attach_controller" 00:26:17.478 } 00:26:17.478 EOF 00:26:17.478 )") 00:26:17.478 14:05:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:26:17.478 14:05:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:26:17.478 14:05:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:26:17.478 14:05:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:17.478 "params": { 00:26:17.478 "name": "Nvme1", 00:26:17.478 "trtype": "tcp", 00:26:17.478 "traddr": "10.0.0.2", 00:26:17.478 "adrfam": "ipv4", 00:26:17.478 "trsvcid": "4420", 00:26:17.478 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:17.478 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:17.478 "hdgst": false, 00:26:17.478 "ddgst": false 00:26:17.478 }, 00:26:17.478 "method": "bdev_nvme_attach_controller" 00:26:17.478 }' 00:26:17.478 [2024-07-15 14:05:12.135329] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:26:17.478 [2024-07-15 14:05:12.135402] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3857690 ] 00:26:17.478 EAL: No free 2048 kB hugepages reported on node 1 00:26:17.478 [2024-07-15 14:05:12.195056] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.478 [2024-07-15 14:05:12.305584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:17.737 Running I/O for 1 seconds... 
00:26:18.673 00:26:18.673 Latency(us) 00:26:18.673 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:18.673 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:18.673 Verification LBA range: start 0x0 length 0x4000 00:26:18.673 Nvme1n1 : 1.00 8861.64 34.62 0.00 0.00 14387.30 713.01 13981.01 00:26:18.673 =================================================================================================================== 00:26:18.673 Total : 8861.64 34.62 0.00 0.00 14387.30 713.01 13981.01 00:26:18.932 14:05:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3857832 00:26:18.932 14:05:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:26:18.932 14:05:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:26:18.932 14:05:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:26:18.932 14:05:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:26:18.932 14:05:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:26:18.932 14:05:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:18.932 14:05:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:18.932 { 00:26:18.932 "params": { 00:26:18.932 "name": "Nvme$subsystem", 00:26:18.932 "trtype": "$TEST_TRANSPORT", 00:26:18.932 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:18.932 "adrfam": "ipv4", 00:26:18.932 "trsvcid": "$NVMF_PORT", 00:26:18.932 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:18.932 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:18.932 "hdgst": ${hdgst:-false}, 00:26:18.932 "ddgst": ${ddgst:-false} 00:26:18.932 }, 00:26:18.932 "method": "bdev_nvme_attach_controller" 00:26:18.932 } 00:26:18.932 EOF 00:26:18.932 )") 00:26:18.932 14:05:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:26:18.932 14:05:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:26:18.932 14:05:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:26:18.932 14:05:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:18.932 "params": { 00:26:18.932 "name": "Nvme1", 00:26:18.932 "trtype": "tcp", 00:26:18.932 "traddr": "10.0.0.2", 00:26:18.932 "adrfam": "ipv4", 00:26:18.932 "trsvcid": "4420", 00:26:18.932 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:18.932 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:18.932 "hdgst": false, 00:26:18.932 "ddgst": false 00:26:18.932 }, 00:26:18.932 "method": "bdev_nvme_attach_controller" 00:26:18.932 }' 00:26:19.190 [2024-07-15 14:05:13.781271] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:26:19.190 [2024-07-15 14:05:13.781361] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3857832 ] 00:26:19.190 EAL: No free 2048 kB hugepages reported on node 1 00:26:19.190 [2024-07-15 14:05:13.844993] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:19.190 [2024-07-15 14:05:13.954417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:19.449 Running I/O for 15 seconds... 
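The configuration each bdevperf instance reads over --json is generated inline by gen_nvmf_target_json (nvmf/common.sh@532-558); the expanded bdev_nvme_attach_controller parameters are printed just above. A sketch of an equivalent run driven from a file instead of a process substitution, assuming the standard SPDK "subsystems"/"bdev" wrapper around the fragment shown in the log (paths, flags and parameter values copied from the invocation above):

cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# 128 queue depth, 4 KiB verify workload for 15 seconds, same flags as host/bdevperf.sh@29
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json /tmp/bdevperf.json -q 128 -o 4096 -w verify -t 15 -f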
00:26:21.981 14:05:16 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3857533 00:26:21.981 14:05:16 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:26:21.981 [2024-07-15 14:05:16.753468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:42792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.981 [2024-07-15 14:05:16.753517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.981 [2024-07-15 14:05:16.753563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:42800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.981 [2024-07-15 14:05:16.753579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.981 [2024-07-15 14:05:16.753596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:42808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.981 [2024-07-15 14:05:16.753611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.981 [2024-07-15 14:05:16.753626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:42816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.981 [2024-07-15 14:05:16.753641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.981 [2024-07-15 14:05:16.753658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:41920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.981 [2024-07-15 14:05:16.753671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.981 [2024-07-15 14:05:16.753687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:41928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.981 [2024-07-15 14:05:16.753701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.981 [2024-07-15 14:05:16.753731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:41936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.981 [2024-07-15 14:05:16.753756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.981 [2024-07-15 14:05:16.753784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:41944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.981 [2024-07-15 14:05:16.753802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.981 [2024-07-15 14:05:16.753819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:41952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.981 [2024-07-15 14:05:16.753834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.981 [2024-07-15 14:05:16.753851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:41960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.981 [2024-07-15 14:05:16.753867] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.981 [2024-07-15 14:05:16.753885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:41968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.981 [2024-07-15 14:05:16.753899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.981 [2024-07-15 14:05:16.753915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:41976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.981 [2024-07-15 14:05:16.753929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.981 [2024-07-15 14:05:16.753945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:42824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.981 [2024-07-15 14:05:16.753963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.981 [2024-07-15 14:05:16.753981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:42832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.981 [2024-07-15 14:05:16.753998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.981 [2024-07-15 14:05:16.754016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:42840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.981 [2024-07-15 14:05:16.754052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.981 [2024-07-15 14:05:16.754070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:41984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.981 [2024-07-15 14:05:16.754085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.981 [2024-07-15 14:05:16.754117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:41992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.981 [2024-07-15 14:05:16.754131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.981 [2024-07-15 14:05:16.754146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:42000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.981 [2024-07-15 14:05:16.754158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.981 [2024-07-15 14:05:16.754173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:42008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.754186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.754199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:42016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.754216] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.754230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:42024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.754242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.754256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:42032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.754269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.754282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:42040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.754294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.754308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.754320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.754334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:42056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.754346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.754359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:42064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.754373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.754387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:42072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.754402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.754416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:42080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.754428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.754442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:42088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.754454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.754468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:42096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.754480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.754494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:42104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.754506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.754520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:42112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.754532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.754549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:42120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.754562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.754576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:42128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.754589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.754603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:42136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.754616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.754629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:42144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.754642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.754655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:42152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.754668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.754681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:42160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.754694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.754708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:42168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.754744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.754763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:42176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.754778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.754796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:42184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.754810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.754827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:42192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.754841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.754857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:42200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.754870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.754886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:42208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.754900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.754915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:42216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.754932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.754948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:42224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.754962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.754978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:42232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.754991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.755007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:42240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.755021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.755050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:42248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.755063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.755077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:42256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.755104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:21.982 [2024-07-15 14:05:16.755120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:42264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.755133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.755147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:42272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.755159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.755173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:42280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.755185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.755199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:42288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.755211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.755225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:42296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.755237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.755251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:42304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.755263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.755277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:42312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.755289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.755306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:42320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.755319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.755333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:42328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.755345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.755359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:42336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.755371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.755385] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:42344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.755397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.755411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:42352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.755423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.755437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:42360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.755450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.755464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:42368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.755475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.755489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:42376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.755501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.755515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:42384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.755527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.755541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:42392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.755553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.755567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:42400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.755579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.755593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:42848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.982 [2024-07-15 14:05:16.755606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.755619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:42856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.982 [2024-07-15 14:05:16.755632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.755649] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:42864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.982 [2024-07-15 14:05:16.755661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.755675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:42872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.982 [2024-07-15 14:05:16.755687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.755701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:42408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.755713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.755751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:42416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.755766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.755791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:42424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.755804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.755820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:42432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.755833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.755849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:42440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.755863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.755879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:42448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.755893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.755908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:42456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.755922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.755937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:42464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.755951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.755966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:42472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.755980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.755996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:42480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.756009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.756040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:42488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.756060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.756075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:42496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.756088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.756101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:42504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.756114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.982 [2024-07-15 14:05:16.756127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:42512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.982 [2024-07-15 14:05:16.756140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.983 [2024-07-15 14:05:16.756153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:42520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.983 [2024-07-15 14:05:16.756166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.983 [2024-07-15 14:05:16.756180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:42528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.983 [2024-07-15 14:05:16.756192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.983 [2024-07-15 14:05:16.756205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:42880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.983 [2024-07-15 14:05:16.756218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.983 [2024-07-15 14:05:16.756232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:42536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.983 [2024-07-15 14:05:16.756244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.983 [2024-07-15 14:05:16.756258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:42544 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.983 [2024-07-15 14:05:16.756270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.983 [2024-07-15 14:05:16.756284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:42552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.983 [2024-07-15 14:05:16.756296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.983 [2024-07-15 14:05:16.756309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:42560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.983 [2024-07-15 14:05:16.756322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.983 [2024-07-15 14:05:16.756335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:42568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.983 [2024-07-15 14:05:16.756347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.983 [2024-07-15 14:05:16.756361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:42576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.983 [2024-07-15 14:05:16.756373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.983 [2024-07-15 14:05:16.756390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:42584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.983 [2024-07-15 14:05:16.756403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.983 [2024-07-15 14:05:16.756418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:42592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.983 [2024-07-15 14:05:16.756430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.983 [2024-07-15 14:05:16.756444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:42600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.983 [2024-07-15 14:05:16.756455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.983 [2024-07-15 14:05:16.756470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:42608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.983 [2024-07-15 14:05:16.756482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.983 [2024-07-15 14:05:16.756496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:42616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.983 [2024-07-15 14:05:16.756509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.983 [2024-07-15 14:05:16.756522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:42624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:21.983 [2024-07-15 14:05:16.756535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.983 [2024-07-15 14:05:16.756548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:42632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.983 [2024-07-15 14:05:16.756560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.983 [2024-07-15 14:05:16.756575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:42640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.983 [2024-07-15 14:05:16.756587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.983 [2024-07-15 14:05:16.756600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:42648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.983 [2024-07-15 14:05:16.756612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.983 [2024-07-15 14:05:16.756626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:42656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.983 [2024-07-15 14:05:16.756638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.983 [2024-07-15 14:05:16.756652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:42888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.983 [2024-07-15 14:05:16.756664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.983 [2024-07-15 14:05:16.756678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:42896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.983 [2024-07-15 14:05:16.756690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.983 [2024-07-15 14:05:16.756704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:42904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.983 [2024-07-15 14:05:16.756719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.983 [2024-07-15 14:05:16.756759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:42912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.983 [2024-07-15 14:05:16.756781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.983 [2024-07-15 14:05:16.756797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:42920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.983 [2024-07-15 14:05:16.756811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.983 [2024-07-15 14:05:16.756827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:42928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.983 [2024-07-15 14:05:16.756840] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.983 [2024-07-15 14:05:16.756856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:42936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.983 [2024-07-15 14:05:16.756870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.983 [2024-07-15 14:05:16.756886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:42664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.983 [2024-07-15 14:05:16.756899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.983 [2024-07-15 14:05:16.756915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:42672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.983 [2024-07-15 14:05:16.756929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.983 [2024-07-15 14:05:16.756945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:42680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.983 [2024-07-15 14:05:16.756959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.983 [2024-07-15 14:05:16.756975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:42688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.983 [2024-07-15 14:05:16.756988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.983 [2024-07-15 14:05:16.757007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:42696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.983 [2024-07-15 14:05:16.757021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.983 [2024-07-15 14:05:16.757051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:42704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.983 [2024-07-15 14:05:16.757065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.983 [2024-07-15 14:05:16.757079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:42712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.983 [2024-07-15 14:05:16.757107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.983 [2024-07-15 14:05:16.757122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:42720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.983 [2024-07-15 14:05:16.757134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.983 [2024-07-15 14:05:16.757147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:42728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.983 [2024-07-15 14:05:16.757163] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.983 [2024-07-15 14:05:16.757177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:42736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.983 [2024-07-15 14:05:16.757190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.983 [2024-07-15 14:05:16.757203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:42744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.983 [2024-07-15 14:05:16.757216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.983 [2024-07-15 14:05:16.757229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:42752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.983 [2024-07-15 14:05:16.757241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.983 [2024-07-15 14:05:16.757255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:42760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.983 [2024-07-15 14:05:16.757268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.983 [2024-07-15 14:05:16.757282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:42768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.983 [2024-07-15 14:05:16.757294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.983 [2024-07-15 14:05:16.757308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:42776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.983 [2024-07-15 14:05:16.757320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.983 [2024-07-15 14:05:16.757332] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197ce60 is same with the state(5) to be set 00:26:21.983 [2024-07-15 14:05:16.757347] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:21.983 [2024-07-15 14:05:16.757357] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:21.983 [2024-07-15 14:05:16.757368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42784 len:8 PRP1 0x0 PRP2 0x0 00:26:21.983 [2024-07-15 14:05:16.757380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.983 [2024-07-15 14:05:16.757439] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x197ce60 was disconnected and freed. reset controller. 
00:26:21.983 [2024-07-15 14:05:16.757515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:21.983 [2024-07-15 14:05:16.757534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.983 [2024-07-15 14:05:16.757548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:21.983 [2024-07-15 14:05:16.757560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.983 [2024-07-15 14:05:16.757573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:21.983 [2024-07-15 14:05:16.757585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.983 [2024-07-15 14:05:16.757597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:21.983 [2024-07-15 14:05:16.757613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.983 [2024-07-15 14:05:16.757625] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:21.983 [2024-07-15 14:05:16.760957] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.983 [2024-07-15 14:05:16.761001] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:21.983 [2024-07-15 14:05:16.761634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.983 [2024-07-15 14:05:16.761691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:21.983 [2024-07-15 14:05:16.761706] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:21.983 [2024-07-15 14:05:16.761943] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:21.983 [2024-07-15 14:05:16.762180] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.983 [2024-07-15 14:05:16.762198] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.983 [2024-07-15 14:05:16.762213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.983 [2024-07-15 14:05:16.765184] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
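
The repeated "connect() failed, errno = 111" records above are ECONNREFUSED on Linux: while the target is not accepting TCP connections on 10.0.0.2:4420, every reconnect attempt made during the controller reset is refused immediately, after which the log shows the controller left in an error state and the reset completing with "Resetting controller failed." A minimal sketch of that errno, illustrative only and not SPDK code (the address and port are copied from the log and assumed to have no listener when this runs):

    import errno
    import socket

    # Address/port taken from the log records above; assumed to have no listener.
    TARGET = ("10.0.0.2", 4420)

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(2.0)
    try:
        s.connect(TARGET)
        print("connected (a listener is present)")
    except OSError as e:
        # ECONNREFUSED is errno 111 on Linux, matching the log records.
        print(f"connect() failed, errno = {e.errno} ({errno.errorcode.get(e.errno, '?')})")
    finally:
        s.close()
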
00:26:21.983 [2024-07-15 14:05:16.774210] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.983 [2024-07-15 14:05:16.774646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.983 [2024-07-15 14:05:16.774685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:21.983 [2024-07-15 14:05:16.774700] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:21.983 [2024-07-15 14:05:16.774940] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:21.983 [2024-07-15 14:05:16.775159] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.983 [2024-07-15 14:05:16.775178] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.983 [2024-07-15 14:05:16.775191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.983 [2024-07-15 14:05:16.778183] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:21.983 [2024-07-15 14:05:16.787446] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.983 [2024-07-15 14:05:16.787870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.983 [2024-07-15 14:05:16.787894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:21.983 [2024-07-15 14:05:16.787924] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:21.983 [2024-07-15 14:05:16.788112] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:21.983 [2024-07-15 14:05:16.788304] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.983 [2024-07-15 14:05:16.788322] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.983 [2024-07-15 14:05:16.788334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.983 [2024-07-15 14:05:16.791272] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.983 [2024-07-15 14:05:16.800620] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.983 [2024-07-15 14:05:16.800969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.983 [2024-07-15 14:05:16.801005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:21.983 [2024-07-15 14:05:16.801019] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:21.983 [2024-07-15 14:05:16.801224] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:21.983 [2024-07-15 14:05:16.801416] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.983 [2024-07-15 14:05:16.801434] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.983 [2024-07-15 14:05:16.801447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.983 [2024-07-15 14:05:16.804399] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:21.983 [2024-07-15 14:05:16.813924] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.983 [2024-07-15 14:05:16.814344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.983 [2024-07-15 14:05:16.814368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:21.983 [2024-07-15 14:05:16.814382] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:21.983 [2024-07-15 14:05:16.814585] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:21.983 [2024-07-15 14:05:16.814809] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.983 [2024-07-15 14:05:16.814829] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.984 [2024-07-15 14:05:16.814841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.984 [2024-07-15 14:05:16.817906] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.244 [2024-07-15 14:05:16.827262] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.244 [2024-07-15 14:05:16.827648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.244 [2024-07-15 14:05:16.827695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:22.244 [2024-07-15 14:05:16.827710] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:22.244 [2024-07-15 14:05:16.827947] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:22.244 [2024-07-15 14:05:16.828158] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.244 [2024-07-15 14:05:16.828177] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.244 [2024-07-15 14:05:16.828190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.244 [2024-07-15 14:05:16.831089] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.244 [2024-07-15 14:05:16.840432] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.244 [2024-07-15 14:05:16.840772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.244 [2024-07-15 14:05:16.840798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:22.244 [2024-07-15 14:05:16.840818] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:22.244 [2024-07-15 14:05:16.841027] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:22.244 [2024-07-15 14:05:16.841220] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.244 [2024-07-15 14:05:16.841238] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.244 [2024-07-15 14:05:16.841250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.244 [2024-07-15 14:05:16.844164] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.244 [2024-07-15 14:05:16.853532] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.244 [2024-07-15 14:05:16.853956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.244 [2024-07-15 14:05:16.853982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:22.244 [2024-07-15 14:05:16.854012] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:22.244 [2024-07-15 14:05:16.854252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:22.244 [2024-07-15 14:05:16.854444] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.244 [2024-07-15 14:05:16.854462] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.244 [2024-07-15 14:05:16.854474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.244 [2024-07-15 14:05:16.857349] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.244 [2024-07-15 14:05:16.866795] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.244 [2024-07-15 14:05:16.867277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.244 [2024-07-15 14:05:16.867315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:22.244 [2024-07-15 14:05:16.867330] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:22.244 [2024-07-15 14:05:16.867524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:22.244 [2024-07-15 14:05:16.867721] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.244 [2024-07-15 14:05:16.867764] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.244 [2024-07-15 14:05:16.867780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.244 [2024-07-15 14:05:16.870762] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.244 [2024-07-15 14:05:16.880038] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.244 [2024-07-15 14:05:16.880504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.244 [2024-07-15 14:05:16.880542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:22.244 [2024-07-15 14:05:16.880557] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:22.245 [2024-07-15 14:05:16.880770] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:22.245 [2024-07-15 14:05:16.880998] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.245 [2024-07-15 14:05:16.881022] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.245 [2024-07-15 14:05:16.881036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.245 [2024-07-15 14:05:16.884016] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.245 [2024-07-15 14:05:16.893233] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.245 [2024-07-15 14:05:16.893573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.245 [2024-07-15 14:05:16.893597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:22.245 [2024-07-15 14:05:16.893612] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:22.245 [2024-07-15 14:05:16.893830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:22.245 [2024-07-15 14:05:16.894049] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.245 [2024-07-15 14:05:16.894069] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.245 [2024-07-15 14:05:16.894082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.245 [2024-07-15 14:05:16.896970] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.245 [2024-07-15 14:05:16.906432] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.245 [2024-07-15 14:05:16.906803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.245 [2024-07-15 14:05:16.906843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:22.245 [2024-07-15 14:05:16.906857] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:22.245 [2024-07-15 14:05:16.907079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:22.245 [2024-07-15 14:05:16.907272] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.245 [2024-07-15 14:05:16.907291] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.245 [2024-07-15 14:05:16.907303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.245 [2024-07-15 14:05:16.910229] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.245 [2024-07-15 14:05:16.919566] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.245 [2024-07-15 14:05:16.919967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.245 [2024-07-15 14:05:16.919992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:22.245 [2024-07-15 14:05:16.920006] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:22.245 [2024-07-15 14:05:16.920225] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:22.245 [2024-07-15 14:05:16.920417] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.245 [2024-07-15 14:05:16.920436] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.245 [2024-07-15 14:05:16.920448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.245 [2024-07-15 14:05:16.923359] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.245 [2024-07-15 14:05:16.932796] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.245 [2024-07-15 14:05:16.933283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.245 [2024-07-15 14:05:16.933331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:22.245 [2024-07-15 14:05:16.933345] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:22.245 [2024-07-15 14:05:16.933554] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:22.245 [2024-07-15 14:05:16.933778] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.245 [2024-07-15 14:05:16.933814] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.245 [2024-07-15 14:05:16.933827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.245 [2024-07-15 14:05:16.936887] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.245 [2024-07-15 14:05:16.946102] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.245 [2024-07-15 14:05:16.946559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.245 [2024-07-15 14:05:16.946584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:22.245 [2024-07-15 14:05:16.946613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:22.245 [2024-07-15 14:05:16.946851] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:22.245 [2024-07-15 14:05:16.947062] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.245 [2024-07-15 14:05:16.947082] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.245 [2024-07-15 14:05:16.947095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.245 [2024-07-15 14:05:16.950083] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.245 [2024-07-15 14:05:16.959210] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.245 [2024-07-15 14:05:16.959644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.245 [2024-07-15 14:05:16.959682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:22.245 [2024-07-15 14:05:16.959696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:22.245 [2024-07-15 14:05:16.959912] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:22.245 [2024-07-15 14:05:16.960123] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.245 [2024-07-15 14:05:16.960142] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.245 [2024-07-15 14:05:16.960154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.245 [2024-07-15 14:05:16.963082] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.245 [2024-07-15 14:05:16.972363] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.245 [2024-07-15 14:05:16.972796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.245 [2024-07-15 14:05:16.972834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:22.245 [2024-07-15 14:05:16.972849] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:22.245 [2024-07-15 14:05:16.973043] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:22.245 [2024-07-15 14:05:16.973234] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.245 [2024-07-15 14:05:16.973252] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.245 [2024-07-15 14:05:16.973264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.245 [2024-07-15 14:05:16.976222] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.245 [2024-07-15 14:05:16.985424] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.245 [2024-07-15 14:05:16.985884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.245 [2024-07-15 14:05:16.985908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:22.245 [2024-07-15 14:05:16.985936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:22.245 [2024-07-15 14:05:16.986124] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:22.245 [2024-07-15 14:05:16.986316] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.245 [2024-07-15 14:05:16.986334] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.245 [2024-07-15 14:05:16.986347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.245 [2024-07-15 14:05:16.989270] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.245 [2024-07-15 14:05:16.998585] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.245 [2024-07-15 14:05:16.999005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.245 [2024-07-15 14:05:16.999029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:22.245 [2024-07-15 14:05:16.999058] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:22.245 [2024-07-15 14:05:16.999247] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:22.245 [2024-07-15 14:05:16.999438] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.245 [2024-07-15 14:05:16.999456] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.245 [2024-07-15 14:05:16.999468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.245 [2024-07-15 14:05:17.002355] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.245 [2024-07-15 14:05:17.011604] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.245 [2024-07-15 14:05:17.012107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.245 [2024-07-15 14:05:17.012146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:22.245 [2024-07-15 14:05:17.012161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:22.245 [2024-07-15 14:05:17.012361] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:22.245 [2024-07-15 14:05:17.012565] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.245 [2024-07-15 14:05:17.012584] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.245 [2024-07-15 14:05:17.012602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.245 [2024-07-15 14:05:17.015914] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.245 [2024-07-15 14:05:17.025421] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.246 [2024-07-15 14:05:17.025930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.246 [2024-07-15 14:05:17.025977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:22.246 [2024-07-15 14:05:17.025993] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:22.246 [2024-07-15 14:05:17.026235] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:22.246 [2024-07-15 14:05:17.026453] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.246 [2024-07-15 14:05:17.026474] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.246 [2024-07-15 14:05:17.026488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.246 [2024-07-15 14:05:17.029646] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.246 [2024-07-15 14:05:17.038646] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.246 [2024-07-15 14:05:17.039067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.246 [2024-07-15 14:05:17.039116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:22.246 [2024-07-15 14:05:17.039130] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:22.246 [2024-07-15 14:05:17.039331] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:22.246 [2024-07-15 14:05:17.039523] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.246 [2024-07-15 14:05:17.039542] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.246 [2024-07-15 14:05:17.039553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.246 [2024-07-15 14:05:17.042559] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.246 [2024-07-15 14:05:17.051931] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.246 [2024-07-15 14:05:17.052409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.246 [2024-07-15 14:05:17.052455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:22.246 [2024-07-15 14:05:17.052468] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:22.246 [2024-07-15 14:05:17.052670] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:22.246 [2024-07-15 14:05:17.052913] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.246 [2024-07-15 14:05:17.052934] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.246 [2024-07-15 14:05:17.052946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.246 [2024-07-15 14:05:17.055851] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.246 [2024-07-15 14:05:17.065066] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.246 [2024-07-15 14:05:17.065526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.246 [2024-07-15 14:05:17.065564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:22.246 [2024-07-15 14:05:17.065579] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:22.246 [2024-07-15 14:05:17.065795] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:22.246 [2024-07-15 14:05:17.065999] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.246 [2024-07-15 14:05:17.066019] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.246 [2024-07-15 14:05:17.066032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.246 [2024-07-15 14:05:17.068913] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.246 [2024-07-15 14:05:17.078178] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.246 [2024-07-15 14:05:17.078655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.246 [2024-07-15 14:05:17.078694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:22.246 [2024-07-15 14:05:17.078709] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:22.246 [2024-07-15 14:05:17.078971] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:22.246 [2024-07-15 14:05:17.079205] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.246 [2024-07-15 14:05:17.079226] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.246 [2024-07-15 14:05:17.079239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.246 [2024-07-15 14:05:17.082280] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.505 [2024-07-15 14:05:17.091625] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.505 [2024-07-15 14:05:17.091995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.505 [2024-07-15 14:05:17.092028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:22.505 [2024-07-15 14:05:17.092056] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:22.505 [2024-07-15 14:05:17.092244] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:22.505 [2024-07-15 14:05:17.092436] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.505 [2024-07-15 14:05:17.092455] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.505 [2024-07-15 14:05:17.092467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.505 [2024-07-15 14:05:17.095354] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.505 [2024-07-15 14:05:17.104843] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.505 [2024-07-15 14:05:17.105257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.505 [2024-07-15 14:05:17.105280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:22.505 [2024-07-15 14:05:17.105307] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:22.505 [2024-07-15 14:05:17.105501] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:22.506 [2024-07-15 14:05:17.105693] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.506 [2024-07-15 14:05:17.105711] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.506 [2024-07-15 14:05:17.105747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.506 [2024-07-15 14:05:17.108710] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.506 [2024-07-15 14:05:17.118127] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.506 [2024-07-15 14:05:17.118475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.506 [2024-07-15 14:05:17.118521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:22.506 [2024-07-15 14:05:17.118535] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:22.506 [2024-07-15 14:05:17.118747] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:22.506 [2024-07-15 14:05:17.118952] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.506 [2024-07-15 14:05:17.118971] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.506 [2024-07-15 14:05:17.118984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.506 [2024-07-15 14:05:17.121921] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.506 [2024-07-15 14:05:17.131403] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.506 [2024-07-15 14:05:17.131773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.506 [2024-07-15 14:05:17.131815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:22.506 [2024-07-15 14:05:17.131830] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:22.506 [2024-07-15 14:05:17.132062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:22.506 [2024-07-15 14:05:17.132254] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.506 [2024-07-15 14:05:17.132273] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.506 [2024-07-15 14:05:17.132285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.506 [2024-07-15 14:05:17.135287] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.506 [2024-07-15 14:05:17.144626] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.506 [2024-07-15 14:05:17.145067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.506 [2024-07-15 14:05:17.145091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:22.506 [2024-07-15 14:05:17.145104] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:22.506 [2024-07-15 14:05:17.145306] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:22.506 [2024-07-15 14:05:17.145498] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.506 [2024-07-15 14:05:17.145516] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.506 [2024-07-15 14:05:17.145533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.506 [2024-07-15 14:05:17.148457] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.506 [2024-07-15 14:05:17.157608] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.506 [2024-07-15 14:05:17.157970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.506 [2024-07-15 14:05:17.158008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:22.506 [2024-07-15 14:05:17.158022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:22.506 [2024-07-15 14:05:17.158224] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:22.506 [2024-07-15 14:05:17.158415] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.506 [2024-07-15 14:05:17.158434] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.506 [2024-07-15 14:05:17.158446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.506 [2024-07-15 14:05:17.161328] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.506 [2024-07-15 14:05:17.170833] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.506 [2024-07-15 14:05:17.171209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.506 [2024-07-15 14:05:17.171248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:22.506 [2024-07-15 14:05:17.171261] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:22.506 [2024-07-15 14:05:17.171463] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:22.506 [2024-07-15 14:05:17.171655] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.506 [2024-07-15 14:05:17.171674] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.506 [2024-07-15 14:05:17.171686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.506 [2024-07-15 14:05:17.174598] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.506 [2024-07-15 14:05:17.183956] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.506 [2024-07-15 14:05:17.184323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.506 [2024-07-15 14:05:17.184363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:22.506 [2024-07-15 14:05:17.184376] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:22.506 [2024-07-15 14:05:17.184578] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:22.506 [2024-07-15 14:05:17.184796] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.506 [2024-07-15 14:05:17.184816] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.506 [2024-07-15 14:05:17.184828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.506 [2024-07-15 14:05:17.187610] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.506 [2024-07-15 14:05:17.197215] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.506 [2024-07-15 14:05:17.197574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.506 [2024-07-15 14:05:17.197620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:22.506 [2024-07-15 14:05:17.197635] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:22.506 [2024-07-15 14:05:17.197865] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:22.506 [2024-07-15 14:05:17.198078] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.506 [2024-07-15 14:05:17.198096] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.506 [2024-07-15 14:05:17.198109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.506 [2024-07-15 14:05:17.201042] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.506 [2024-07-15 14:05:17.210440] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.506 [2024-07-15 14:05:17.210784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.506 [2024-07-15 14:05:17.210811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:22.506 [2024-07-15 14:05:17.210827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:22.506 [2024-07-15 14:05:17.211028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:22.506 [2024-07-15 14:05:17.211251] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.506 [2024-07-15 14:05:17.211270] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.506 [2024-07-15 14:05:17.211283] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.506 [2024-07-15 14:05:17.214196] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.506 [2024-07-15 14:05:17.223937] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.506 [2024-07-15 14:05:17.224371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.506 [2024-07-15 14:05:17.224396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:22.506 [2024-07-15 14:05:17.224411] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:22.506 [2024-07-15 14:05:17.224631] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:22.506 [2024-07-15 14:05:17.224871] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.506 [2024-07-15 14:05:17.224894] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.506 [2024-07-15 14:05:17.224908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.506 [2024-07-15 14:05:17.227987] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.507 [2024-07-15 14:05:17.237181] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.507 [2024-07-15 14:05:17.237522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.507 [2024-07-15 14:05:17.237547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:22.507 [2024-07-15 14:05:17.237561] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:22.507 [2024-07-15 14:05:17.237774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:22.507 [2024-07-15 14:05:17.237977] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.507 [2024-07-15 14:05:17.237996] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.507 [2024-07-15 14:05:17.238009] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.507 [2024-07-15 14:05:17.241103] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.507 [2024-07-15 14:05:17.250848] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.507 [2024-07-15 14:05:17.251262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.507 [2024-07-15 14:05:17.251286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:22.507 [2024-07-15 14:05:17.251300] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:22.507 [2024-07-15 14:05:17.251529] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:22.507 [2024-07-15 14:05:17.251759] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.507 [2024-07-15 14:05:17.251781] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.507 [2024-07-15 14:05:17.251795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.507 [2024-07-15 14:05:17.255053] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.507 [2024-07-15 14:05:17.264276] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.507 [2024-07-15 14:05:17.264686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.507 [2024-07-15 14:05:17.264728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:22.507 [2024-07-15 14:05:17.264751] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:22.507 [2024-07-15 14:05:17.264980] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:22.507 [2024-07-15 14:05:17.265227] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.507 [2024-07-15 14:05:17.265248] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.507 [2024-07-15 14:05:17.265262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.507 [2024-07-15 14:05:17.268671] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.507 [2024-07-15 14:05:17.277770] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.507 [2024-07-15 14:05:17.278139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.507 [2024-07-15 14:05:17.278165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:22.507 [2024-07-15 14:05:17.278180] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:22.507 [2024-07-15 14:05:17.278397] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:22.507 [2024-07-15 14:05:17.278617] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.507 [2024-07-15 14:05:17.278637] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.507 [2024-07-15 14:05:17.278650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.507 [2024-07-15 14:05:17.281833] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.507 [2024-07-15 14:05:17.291380] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.507 [2024-07-15 14:05:17.291783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.507 [2024-07-15 14:05:17.291812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:22.507 [2024-07-15 14:05:17.291828] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:22.507 [2024-07-15 14:05:17.292057] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:22.507 [2024-07-15 14:05:17.292295] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.507 [2024-07-15 14:05:17.292315] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.507 [2024-07-15 14:05:17.292329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.507 [2024-07-15 14:05:17.295536] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.507 [2024-07-15 14:05:17.304933] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.507 [2024-07-15 14:05:17.305350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.507 [2024-07-15 14:05:17.305386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:22.507 [2024-07-15 14:05:17.305417] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:22.507 [2024-07-15 14:05:17.305617] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:22.507 [2024-07-15 14:05:17.305855] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.507 [2024-07-15 14:05:17.305877] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.507 [2024-07-15 14:05:17.305892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.507 [2024-07-15 14:05:17.308919] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.507 [2024-07-15 14:05:17.318300] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.507 [2024-07-15 14:05:17.318671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.507 [2024-07-15 14:05:17.318710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:22.507 [2024-07-15 14:05:17.318723] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:22.507 [2024-07-15 14:05:17.318955] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:22.507 [2024-07-15 14:05:17.319174] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.507 [2024-07-15 14:05:17.319192] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.507 [2024-07-15 14:05:17.319205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.507 [2024-07-15 14:05:17.322224] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.507 [2024-07-15 14:05:17.331638] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.507 [2024-07-15 14:05:17.332045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.507 [2024-07-15 14:05:17.332069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:22.507 [2024-07-15 14:05:17.332088] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:22.507 [2024-07-15 14:05:17.332277] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:22.507 [2024-07-15 14:05:17.332468] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.507 [2024-07-15 14:05:17.332487] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.507 [2024-07-15 14:05:17.332499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.507 [2024-07-15 14:05:17.335473] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.507 [2024-07-15 14:05:17.345162] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.767 [2024-07-15 14:05:17.345524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.767 [2024-07-15 14:05:17.345549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:22.767 [2024-07-15 14:05:17.345564] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:22.767 [2024-07-15 14:05:17.345778] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:22.767 [2024-07-15 14:05:17.346019] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.767 [2024-07-15 14:05:17.346040] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.767 [2024-07-15 14:05:17.346054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.767 [2024-07-15 14:05:17.348994] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.767 [2024-07-15 14:05:17.358384] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.767 [2024-07-15 14:05:17.358750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.767 [2024-07-15 14:05:17.358776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:22.767 [2024-07-15 14:05:17.358790] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:22.767 [2024-07-15 14:05:17.358984] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:22.767 [2024-07-15 14:05:17.359192] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.767 [2024-07-15 14:05:17.359210] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.767 [2024-07-15 14:05:17.359222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.767 [2024-07-15 14:05:17.362144] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.767 [2024-07-15 14:05:17.371560] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.767 [2024-07-15 14:05:17.371964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.767 [2024-07-15 14:05:17.371989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:22.767 [2024-07-15 14:05:17.372003] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:22.767 [2024-07-15 14:05:17.372222] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:22.767 [2024-07-15 14:05:17.372414] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.767 [2024-07-15 14:05:17.372437] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.767 [2024-07-15 14:05:17.372450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.767 [2024-07-15 14:05:17.375331] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.767 [2024-07-15 14:05:17.384833] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.767 [2024-07-15 14:05:17.385187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.767 [2024-07-15 14:05:17.385212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:22.767 [2024-07-15 14:05:17.385225] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:22.767 [2024-07-15 14:05:17.385413] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:22.767 [2024-07-15 14:05:17.385605] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.767 [2024-07-15 14:05:17.385623] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.767 [2024-07-15 14:05:17.385635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.767 [2024-07-15 14:05:17.388529] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.767 [2024-07-15 14:05:17.397814] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.767 [2024-07-15 14:05:17.398180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.767 [2024-07-15 14:05:17.398204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:22.767 [2024-07-15 14:05:17.398218] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:22.767 [2024-07-15 14:05:17.398406] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:22.767 [2024-07-15 14:05:17.398597] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.767 [2024-07-15 14:05:17.398615] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.767 [2024-07-15 14:05:17.398627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.767 [2024-07-15 14:05:17.401437] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.767 [2024-07-15 14:05:17.410808] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.767 [2024-07-15 14:05:17.411143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.767 [2024-07-15 14:05:17.411167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:22.767 [2024-07-15 14:05:17.411181] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:22.767 [2024-07-15 14:05:17.411369] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:22.767 [2024-07-15 14:05:17.411561] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.767 [2024-07-15 14:05:17.411578] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.767 [2024-07-15 14:05:17.411590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.767 [2024-07-15 14:05:17.414401] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.767 [2024-07-15 14:05:17.423864] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.767 [2024-07-15 14:05:17.424218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.767 [2024-07-15 14:05:17.424243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:22.767 [2024-07-15 14:05:17.424256] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:22.767 [2024-07-15 14:05:17.424444] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:22.767 [2024-07-15 14:05:17.424636] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.767 [2024-07-15 14:05:17.424655] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.767 [2024-07-15 14:05:17.424667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.767 [2024-07-15 14:05:17.427576] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.767 [2024-07-15 14:05:17.437019] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.767 [2024-07-15 14:05:17.437373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.767 [2024-07-15 14:05:17.437397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:22.767 [2024-07-15 14:05:17.437411] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:22.767 [2024-07-15 14:05:17.437599] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:22.767 [2024-07-15 14:05:17.437821] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.767 [2024-07-15 14:05:17.437841] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.767 [2024-07-15 14:05:17.437854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.767 [2024-07-15 14:05:17.440721] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.767 [2024-07-15 14:05:17.450071] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.767 [2024-07-15 14:05:17.450427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.767 [2024-07-15 14:05:17.450452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:22.767 [2024-07-15 14:05:17.450465] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:22.767 [2024-07-15 14:05:17.450654] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:22.767 [2024-07-15 14:05:17.450889] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.767 [2024-07-15 14:05:17.450910] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.767 [2024-07-15 14:05:17.450923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.767 [2024-07-15 14:05:17.453850] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.767 [2024-07-15 14:05:17.463066] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.767 [2024-07-15 14:05:17.463405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.767 [2024-07-15 14:05:17.463429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:22.767 [2024-07-15 14:05:17.463443] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:22.767 [2024-07-15 14:05:17.463635] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:22.767 [2024-07-15 14:05:17.463870] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.767 [2024-07-15 14:05:17.463891] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.767 [2024-07-15 14:05:17.463904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.767 [2024-07-15 14:05:17.466794] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.767 [2024-07-15 14:05:17.476176] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.767 [2024-07-15 14:05:17.476526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.767 [2024-07-15 14:05:17.476551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:22.767 [2024-07-15 14:05:17.476565] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:22.767 [2024-07-15 14:05:17.476778] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:22.768 [2024-07-15 14:05:17.476977] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.768 [2024-07-15 14:05:17.476996] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.768 [2024-07-15 14:05:17.477008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.768 [2024-07-15 14:05:17.479793] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.768 [2024-07-15 14:05:17.489246] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.768 [2024-07-15 14:05:17.489657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.768 [2024-07-15 14:05:17.489681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:22.768 [2024-07-15 14:05:17.489708] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:22.768 [2024-07-15 14:05:17.489929] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:22.768 [2024-07-15 14:05:17.490145] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.768 [2024-07-15 14:05:17.490164] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.768 [2024-07-15 14:05:17.490176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.768 [2024-07-15 14:05:17.493062] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.768 [2024-07-15 14:05:17.502334] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.768 [2024-07-15 14:05:17.502703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.768 [2024-07-15 14:05:17.502747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:22.768 [2024-07-15 14:05:17.502762] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:22.768 [2024-07-15 14:05:17.502970] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:22.768 [2024-07-15 14:05:17.503180] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.768 [2024-07-15 14:05:17.503199] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.768 [2024-07-15 14:05:17.503215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.768 [2024-07-15 14:05:17.506165] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.768 [2024-07-15 14:05:17.515366] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.768 [2024-07-15 14:05:17.515771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.768 [2024-07-15 14:05:17.515798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:22.768 [2024-07-15 14:05:17.515828] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:22.768 [2024-07-15 14:05:17.516042] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:22.768 [2024-07-15 14:05:17.516309] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.768 [2024-07-15 14:05:17.516329] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.768 [2024-07-15 14:05:17.516343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.768 [2024-07-15 14:05:17.519823] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.768 [2024-07-15 14:05:17.528583] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.768 [2024-07-15 14:05:17.528982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.768 [2024-07-15 14:05:17.529007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:22.768 [2024-07-15 14:05:17.529021] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:22.768 [2024-07-15 14:05:17.529224] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:22.768 [2024-07-15 14:05:17.529415] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.768 [2024-07-15 14:05:17.529433] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.768 [2024-07-15 14:05:17.529445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.768 [2024-07-15 14:05:17.532364] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.768 [2024-07-15 14:05:17.541712] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.768 [2024-07-15 14:05:17.542071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.768 [2024-07-15 14:05:17.542110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:22.768 [2024-07-15 14:05:17.542123] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:22.768 [2024-07-15 14:05:17.542325] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:22.768 [2024-07-15 14:05:17.542517] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.768 [2024-07-15 14:05:17.542535] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.768 [2024-07-15 14:05:17.542547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.768 [2024-07-15 14:05:17.545355] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.768 [2024-07-15 14:05:17.554853] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.768 [2024-07-15 14:05:17.555212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.768 [2024-07-15 14:05:17.555236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:22.768 [2024-07-15 14:05:17.555265] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:22.768 [2024-07-15 14:05:17.555466] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:22.768 [2024-07-15 14:05:17.555658] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.768 [2024-07-15 14:05:17.555676] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.768 [2024-07-15 14:05:17.555688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.768 [2024-07-15 14:05:17.558600] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.768 [2024-07-15 14:05:17.567882] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.768 [2024-07-15 14:05:17.568239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.768 [2024-07-15 14:05:17.568278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:22.768 [2024-07-15 14:05:17.568292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:22.768 [2024-07-15 14:05:17.568495] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:22.768 [2024-07-15 14:05:17.568686] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.768 [2024-07-15 14:05:17.568704] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.768 [2024-07-15 14:05:17.568717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.768 [2024-07-15 14:05:17.571632] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.768 [2024-07-15 14:05:17.580879] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.768 [2024-07-15 14:05:17.581288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.768 [2024-07-15 14:05:17.581311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:22.768 [2024-07-15 14:05:17.581339] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:22.768 [2024-07-15 14:05:17.581527] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:22.768 [2024-07-15 14:05:17.581719] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.768 [2024-07-15 14:05:17.581743] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.768 [2024-07-15 14:05:17.581772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.768 [2024-07-15 14:05:17.584647] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.768 [2024-07-15 14:05:17.593898] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.768 [2024-07-15 14:05:17.594222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.768 [2024-07-15 14:05:17.594246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:22.768 [2024-07-15 14:05:17.594260] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:22.768 [2024-07-15 14:05:17.594449] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:22.768 [2024-07-15 14:05:17.594645] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.768 [2024-07-15 14:05:17.594663] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.768 [2024-07-15 14:05:17.594675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.768 [2024-07-15 14:05:17.597585] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.028 [2024-07-15 14:05:17.607265] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.028 [2024-07-15 14:05:17.607657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.028 [2024-07-15 14:05:17.607696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.028 [2024-07-15 14:05:17.607710] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.028 [2024-07-15 14:05:17.607945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.028 [2024-07-15 14:05:17.608162] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.028 [2024-07-15 14:05:17.608181] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.028 [2024-07-15 14:05:17.608193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.028 [2024-07-15 14:05:17.611151] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.028 [2024-07-15 14:05:17.620309] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.028 [2024-07-15 14:05:17.620664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.028 [2024-07-15 14:05:17.620689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.028 [2024-07-15 14:05:17.620702] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.028 [2024-07-15 14:05:17.620937] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.028 [2024-07-15 14:05:17.621157] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.028 [2024-07-15 14:05:17.621190] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.028 [2024-07-15 14:05:17.621202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.028 [2024-07-15 14:05:17.624087] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.028 [2024-07-15 14:05:17.633284] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.028 [2024-07-15 14:05:17.633620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.028 [2024-07-15 14:05:17.633644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.028 [2024-07-15 14:05:17.633657] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.028 [2024-07-15 14:05:17.633874] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.028 [2024-07-15 14:05:17.634087] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.028 [2024-07-15 14:05:17.634105] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.028 [2024-07-15 14:05:17.634117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.028 [2024-07-15 14:05:17.636931] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.029 [2024-07-15 14:05:17.646290] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.029 [2024-07-15 14:05:17.646645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.029 [2024-07-15 14:05:17.646669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.029 [2024-07-15 14:05:17.646697] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.029 [2024-07-15 14:05:17.646926] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.029 [2024-07-15 14:05:17.647139] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.029 [2024-07-15 14:05:17.647158] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.029 [2024-07-15 14:05:17.647170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.029 [2024-07-15 14:05:17.649939] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.029 [2024-07-15 14:05:17.659275] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.029 [2024-07-15 14:05:17.659632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.029 [2024-07-15 14:05:17.659656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.029 [2024-07-15 14:05:17.659684] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.029 [2024-07-15 14:05:17.659913] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.029 [2024-07-15 14:05:17.660125] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.029 [2024-07-15 14:05:17.660144] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.029 [2024-07-15 14:05:17.660156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.029 [2024-07-15 14:05:17.662925] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.029 [2024-07-15 14:05:17.672258] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.029 [2024-07-15 14:05:17.672595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.029 [2024-07-15 14:05:17.672619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.029 [2024-07-15 14:05:17.672633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.029 [2024-07-15 14:05:17.672848] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.029 [2024-07-15 14:05:17.673046] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.029 [2024-07-15 14:05:17.673080] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.029 [2024-07-15 14:05:17.673092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.029 [2024-07-15 14:05:17.675860] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.029 [2024-07-15 14:05:17.685313] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.029 [2024-07-15 14:05:17.685722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.029 [2024-07-15 14:05:17.685776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.029 [2024-07-15 14:05:17.685791] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.029 [2024-07-15 14:05:17.685993] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.029 [2024-07-15 14:05:17.686185] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.029 [2024-07-15 14:05:17.686203] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.029 [2024-07-15 14:05:17.686215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.029 [2024-07-15 14:05:17.689023] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.029 [2024-07-15 14:05:17.698363] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.029 [2024-07-15 14:05:17.698748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.029 [2024-07-15 14:05:17.698791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.029 [2024-07-15 14:05:17.698805] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.029 [2024-07-15 14:05:17.699007] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.029 [2024-07-15 14:05:17.699199] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.029 [2024-07-15 14:05:17.699217] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.029 [2024-07-15 14:05:17.699229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.029 [2024-07-15 14:05:17.702100] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.029 [2024-07-15 14:05:17.711430] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.029 [2024-07-15 14:05:17.711783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.029 [2024-07-15 14:05:17.711808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.029 [2024-07-15 14:05:17.711821] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.029 [2024-07-15 14:05:17.712010] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.029 [2024-07-15 14:05:17.712201] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.029 [2024-07-15 14:05:17.712220] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.029 [2024-07-15 14:05:17.712232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.029 [2024-07-15 14:05:17.715144] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.029 [2024-07-15 14:05:17.724587] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.029 [2024-07-15 14:05:17.724963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.029 [2024-07-15 14:05:17.725002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.029 [2024-07-15 14:05:17.725015] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.029 [2024-07-15 14:05:17.725217] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.029 [2024-07-15 14:05:17.725413] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.029 [2024-07-15 14:05:17.725432] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.029 [2024-07-15 14:05:17.725444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.029 [2024-07-15 14:05:17.728324] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.029 [2024-07-15 14:05:17.737766] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.029 [2024-07-15 14:05:17.738117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.029 [2024-07-15 14:05:17.738141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.029 [2024-07-15 14:05:17.738168] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.029 [2024-07-15 14:05:17.738370] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.029 [2024-07-15 14:05:17.738562] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.029 [2024-07-15 14:05:17.738580] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.029 [2024-07-15 14:05:17.738592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.029 [2024-07-15 14:05:17.741504] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.029 [2024-07-15 14:05:17.750746] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.029 [2024-07-15 14:05:17.751100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.029 [2024-07-15 14:05:17.751138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.029 [2024-07-15 14:05:17.751151] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.029 [2024-07-15 14:05:17.751353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.029 [2024-07-15 14:05:17.751545] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.029 [2024-07-15 14:05:17.751563] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.029 [2024-07-15 14:05:17.751575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.029 [2024-07-15 14:05:17.754476] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.029 [2024-07-15 14:05:17.764017] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.029 [2024-07-15 14:05:17.764375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.029 [2024-07-15 14:05:17.764399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.029 [2024-07-15 14:05:17.764413] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.029 [2024-07-15 14:05:17.764601] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.029 [2024-07-15 14:05:17.764834] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.029 [2024-07-15 14:05:17.764854] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.029 [2024-07-15 14:05:17.764867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.029 [2024-07-15 14:05:17.767807] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.029 [2024-07-15 14:05:17.777612] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.029 [2024-07-15 14:05:17.777991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.029 [2024-07-15 14:05:17.778033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.029 [2024-07-15 14:05:17.778047] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.029 [2024-07-15 14:05:17.778251] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.029 [2024-07-15 14:05:17.778444] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.029 [2024-07-15 14:05:17.778462] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.030 [2024-07-15 14:05:17.778474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.030 [2024-07-15 14:05:17.781424] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.030 [2024-07-15 14:05:17.790910] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.030 [2024-07-15 14:05:17.791280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.030 [2024-07-15 14:05:17.791304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.030 [2024-07-15 14:05:17.791318] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.030 [2024-07-15 14:05:17.791506] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.030 [2024-07-15 14:05:17.791699] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.030 [2024-07-15 14:05:17.791717] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.030 [2024-07-15 14:05:17.791754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.030 [2024-07-15 14:05:17.794762] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.030 [2024-07-15 14:05:17.804214] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.030 [2024-07-15 14:05:17.804579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.030 [2024-07-15 14:05:17.804616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.030 [2024-07-15 14:05:17.804630] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.030 [2024-07-15 14:05:17.804871] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.030 [2024-07-15 14:05:17.805101] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.030 [2024-07-15 14:05:17.805120] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.030 [2024-07-15 14:05:17.805148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.030 [2024-07-15 14:05:17.808173] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.030 [2024-07-15 14:05:17.817546] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.030 [2024-07-15 14:05:17.817982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.030 [2024-07-15 14:05:17.818008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.030 [2024-07-15 14:05:17.818027] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.030 [2024-07-15 14:05:17.818248] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.030 [2024-07-15 14:05:17.818439] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.030 [2024-07-15 14:05:17.818457] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.030 [2024-07-15 14:05:17.818469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.030 [2024-07-15 14:05:17.821465] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.030 [2024-07-15 14:05:17.830692] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.030 [2024-07-15 14:05:17.831068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.030 [2024-07-15 14:05:17.831107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.030 [2024-07-15 14:05:17.831120] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.030 [2024-07-15 14:05:17.831322] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.030 [2024-07-15 14:05:17.831513] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.030 [2024-07-15 14:05:17.831532] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.030 [2024-07-15 14:05:17.831544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.030 [2024-07-15 14:05:17.834351] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.030 [2024-07-15 14:05:17.843647] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.030 [2024-07-15 14:05:17.844019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.030 [2024-07-15 14:05:17.844059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.030 [2024-07-15 14:05:17.844073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.030 [2024-07-15 14:05:17.844261] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.030 [2024-07-15 14:05:17.844453] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.030 [2024-07-15 14:05:17.844471] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.030 [2024-07-15 14:05:17.844483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.030 [2024-07-15 14:05:17.847400] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.030 [2024-07-15 14:05:17.856841] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.030 [2024-07-15 14:05:17.857201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.030 [2024-07-15 14:05:17.857225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.030 [2024-07-15 14:05:17.857239] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.030 [2024-07-15 14:05:17.857427] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.030 [2024-07-15 14:05:17.857619] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.030 [2024-07-15 14:05:17.857644] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.030 [2024-07-15 14:05:17.857657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.030 [2024-07-15 14:05:17.860467] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.291 [2024-07-15 14:05:17.870201] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.291 [2024-07-15 14:05:17.870544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.291 [2024-07-15 14:05:17.870590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.291 [2024-07-15 14:05:17.870604] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.291 [2024-07-15 14:05:17.870837] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.291 [2024-07-15 14:05:17.871055] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.291 [2024-07-15 14:05:17.871074] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.291 [2024-07-15 14:05:17.871087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.291 [2024-07-15 14:05:17.874226] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.291 [2024-07-15 14:05:17.883455] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.291 [2024-07-15 14:05:17.883837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.291 [2024-07-15 14:05:17.883880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.291 [2024-07-15 14:05:17.883895] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.291 [2024-07-15 14:05:17.884142] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.291 [2024-07-15 14:05:17.884334] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.291 [2024-07-15 14:05:17.884352] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.291 [2024-07-15 14:05:17.884364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.291 [2024-07-15 14:05:17.887317] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.291 [2024-07-15 14:05:17.896677] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.291 [2024-07-15 14:05:17.897120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.291 [2024-07-15 14:05:17.897158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.291 [2024-07-15 14:05:17.897172] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.291 [2024-07-15 14:05:17.897373] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.291 [2024-07-15 14:05:17.897565] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.291 [2024-07-15 14:05:17.897583] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.291 [2024-07-15 14:05:17.897596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.291 [2024-07-15 14:05:17.900551] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.291 [2024-07-15 14:05:17.909859] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.291 [2024-07-15 14:05:17.910272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.291 [2024-07-15 14:05:17.910297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.291 [2024-07-15 14:05:17.910310] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.291 [2024-07-15 14:05:17.910513] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.291 [2024-07-15 14:05:17.910704] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.291 [2024-07-15 14:05:17.910722] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.291 [2024-07-15 14:05:17.910734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.291 [2024-07-15 14:05:17.913650] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.291 [2024-07-15 14:05:17.922940] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.291 [2024-07-15 14:05:17.923298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.291 [2024-07-15 14:05:17.923335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.291 [2024-07-15 14:05:17.923349] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.291 [2024-07-15 14:05:17.923550] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.291 [2024-07-15 14:05:17.923751] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.291 [2024-07-15 14:05:17.923785] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.291 [2024-07-15 14:05:17.923797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.291 [2024-07-15 14:05:17.926583] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.291 [2024-07-15 14:05:17.936040] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.291 [2024-07-15 14:05:17.936391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.291 [2024-07-15 14:05:17.936415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.291 [2024-07-15 14:05:17.936429] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.291 [2024-07-15 14:05:17.936617] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.292 [2024-07-15 14:05:17.936836] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.292 [2024-07-15 14:05:17.936856] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.292 [2024-07-15 14:05:17.936868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.292 [2024-07-15 14:05:17.939653] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.292 [2024-07-15 14:05:17.949179] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.292 [2024-07-15 14:05:17.949535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.292 [2024-07-15 14:05:17.949559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.292 [2024-07-15 14:05:17.949573] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.292 [2024-07-15 14:05:17.949807] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.292 [2024-07-15 14:05:17.950026] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.292 [2024-07-15 14:05:17.950046] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.292 [2024-07-15 14:05:17.950058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.292 [2024-07-15 14:05:17.952943] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.292 [2024-07-15 14:05:17.962289] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.292 [2024-07-15 14:05:17.962622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.292 [2024-07-15 14:05:17.962668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.292 [2024-07-15 14:05:17.962682] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.292 [2024-07-15 14:05:17.962902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.292 [2024-07-15 14:05:17.963118] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.292 [2024-07-15 14:05:17.963137] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.292 [2024-07-15 14:05:17.963149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.292 [2024-07-15 14:05:17.966021] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.292 [2024-07-15 14:05:17.975457] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.292 [2024-07-15 14:05:17.975855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.292 [2024-07-15 14:05:17.975880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.292 [2024-07-15 14:05:17.975894] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.292 [2024-07-15 14:05:17.976121] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.292 [2024-07-15 14:05:17.976319] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.292 [2024-07-15 14:05:17.976338] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.292 [2024-07-15 14:05:17.976351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.292 [2024-07-15 14:05:17.979350] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.292 [2024-07-15 14:05:17.988834] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.292 [2024-07-15 14:05:17.989197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.292 [2024-07-15 14:05:17.989237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.292 [2024-07-15 14:05:17.989251] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.292 [2024-07-15 14:05:17.989459] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.292 [2024-07-15 14:05:17.989657] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.292 [2024-07-15 14:05:17.989676] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.292 [2024-07-15 14:05:17.989694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.292 [2024-07-15 14:05:17.992709] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.292 [2024-07-15 14:05:18.002062] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.292 [2024-07-15 14:05:18.002434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.292 [2024-07-15 14:05:18.002473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.292 [2024-07-15 14:05:18.002487] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.292 [2024-07-15 14:05:18.002695] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.292 [2024-07-15 14:05:18.002927] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.292 [2024-07-15 14:05:18.002948] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.292 [2024-07-15 14:05:18.002961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.292 [2024-07-15 14:05:18.005956] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.292 [2024-07-15 14:05:18.015280] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.292 [2024-07-15 14:05:18.015638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.292 [2024-07-15 14:05:18.015663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.292 [2024-07-15 14:05:18.015692] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.292 [2024-07-15 14:05:18.015928] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.292 [2024-07-15 14:05:18.016147] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.292 [2024-07-15 14:05:18.016166] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.292 [2024-07-15 14:05:18.016178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.292 [2024-07-15 14:05:18.019181] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.292 [2024-07-15 14:05:18.028957] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.292 [2024-07-15 14:05:18.029369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.292 [2024-07-15 14:05:18.029412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.292 [2024-07-15 14:05:18.029426] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.292 [2024-07-15 14:05:18.029634] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.292 [2024-07-15 14:05:18.029865] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.292 [2024-07-15 14:05:18.029886] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.292 [2024-07-15 14:05:18.029899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.292 [2024-07-15 14:05:18.032988] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.292 [2024-07-15 14:05:18.042394] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.292 [2024-07-15 14:05:18.042835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.292 [2024-07-15 14:05:18.042882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.292 [2024-07-15 14:05:18.042898] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.292 [2024-07-15 14:05:18.043137] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.292 [2024-07-15 14:05:18.043335] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.292 [2024-07-15 14:05:18.043353] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.292 [2024-07-15 14:05:18.043366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.292 [2024-07-15 14:05:18.046401] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.292 [2024-07-15 14:05:18.055800] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.292 [2024-07-15 14:05:18.056214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.292 [2024-07-15 14:05:18.056258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.292 [2024-07-15 14:05:18.056272] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.292 [2024-07-15 14:05:18.056474] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.292 [2024-07-15 14:05:18.056666] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.292 [2024-07-15 14:05:18.056685] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.292 [2024-07-15 14:05:18.056697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.292 [2024-07-15 14:05:18.059689] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.292 [2024-07-15 14:05:18.069210] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.292 [2024-07-15 14:05:18.069616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.292 [2024-07-15 14:05:18.069639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.292 [2024-07-15 14:05:18.069653] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.292 [2024-07-15 14:05:18.069907] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.292 [2024-07-15 14:05:18.070141] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.292 [2024-07-15 14:05:18.070160] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.292 [2024-07-15 14:05:18.070172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.292 [2024-07-15 14:05:18.073203] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.292 [2024-07-15 14:05:18.082568] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.292 [2024-07-15 14:05:18.082990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.292 [2024-07-15 14:05:18.083038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.292 [2024-07-15 14:05:18.083052] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.293 [2024-07-15 14:05:18.083260] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.293 [2024-07-15 14:05:18.083462] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.293 [2024-07-15 14:05:18.083481] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.293 [2024-07-15 14:05:18.083494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.293 [2024-07-15 14:05:18.086422] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.293 [2024-07-15 14:05:18.095903] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.293 [2024-07-15 14:05:18.096380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.293 [2024-07-15 14:05:18.096426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.293 [2024-07-15 14:05:18.096440] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.293 [2024-07-15 14:05:18.096648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.293 [2024-07-15 14:05:18.096874] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.293 [2024-07-15 14:05:18.096894] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.293 [2024-07-15 14:05:18.096907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.293 [2024-07-15 14:05:18.099924] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.293 [2024-07-15 14:05:18.109115] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.293 [2024-07-15 14:05:18.109559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.293 [2024-07-15 14:05:18.109597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.293 [2024-07-15 14:05:18.109612] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.293 [2024-07-15 14:05:18.109852] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.293 [2024-07-15 14:05:18.110078] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.293 [2024-07-15 14:05:18.110112] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.293 [2024-07-15 14:05:18.110125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.293 [2024-07-15 14:05:18.113113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.293 [2024-07-15 14:05:18.122433] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.293 [2024-07-15 14:05:18.122817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.293 [2024-07-15 14:05:18.122855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.293 [2024-07-15 14:05:18.122870] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.293 [2024-07-15 14:05:18.123090] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.293 [2024-07-15 14:05:18.123282] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.293 [2024-07-15 14:05:18.123301] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.293 [2024-07-15 14:05:18.123312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.293 [2024-07-15 14:05:18.126327] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.552 [2024-07-15 14:05:18.135713] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.552 [2024-07-15 14:05:18.136192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.552 [2024-07-15 14:05:18.136229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.552 [2024-07-15 14:05:18.136243] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.552 [2024-07-15 14:05:18.136446] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.552 [2024-07-15 14:05:18.136638] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.552 [2024-07-15 14:05:18.136656] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.552 [2024-07-15 14:05:18.136668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.552 [2024-07-15 14:05:18.139593] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.552 [2024-07-15 14:05:18.148789] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.552 [2024-07-15 14:05:18.149240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.552 [2024-07-15 14:05:18.149284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.552 [2024-07-15 14:05:18.149298] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.552 [2024-07-15 14:05:18.149501] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.552 [2024-07-15 14:05:18.149693] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.552 [2024-07-15 14:05:18.149711] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.552 [2024-07-15 14:05:18.149723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.552 [2024-07-15 14:05:18.152660] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.552 [2024-07-15 14:05:18.161904] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.552 [2024-07-15 14:05:18.162351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.552 [2024-07-15 14:05:18.162389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.552 [2024-07-15 14:05:18.162403] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.552 [2024-07-15 14:05:18.162591] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.552 [2024-07-15 14:05:18.162809] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.552 [2024-07-15 14:05:18.162829] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.552 [2024-07-15 14:05:18.162842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.552 [2024-07-15 14:05:18.165666] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.552 [2024-07-15 14:05:18.175045] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.552 [2024-07-15 14:05:18.175462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.552 [2024-07-15 14:05:18.175506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.552 [2024-07-15 14:05:18.175525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.552 [2024-07-15 14:05:18.175751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.552 [2024-07-15 14:05:18.175969] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.552 [2024-07-15 14:05:18.175989] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.552 [2024-07-15 14:05:18.176002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.552 [2024-07-15 14:05:18.178911] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.552 [2024-07-15 14:05:18.188087] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.552 [2024-07-15 14:05:18.188534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.552 [2024-07-15 14:05:18.188578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.552 [2024-07-15 14:05:18.188592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.552 [2024-07-15 14:05:18.188823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.552 [2024-07-15 14:05:18.189027] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.552 [2024-07-15 14:05:18.189046] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.552 [2024-07-15 14:05:18.189076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.552 [2024-07-15 14:05:18.191963] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.552 [2024-07-15 14:05:18.201236] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.552 [2024-07-15 14:05:18.201682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.552 [2024-07-15 14:05:18.201727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.552 [2024-07-15 14:05:18.201749] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.552 [2024-07-15 14:05:18.201958] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.552 [2024-07-15 14:05:18.202168] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.552 [2024-07-15 14:05:18.202187] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.552 [2024-07-15 14:05:18.202199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.552 [2024-07-15 14:05:18.205004] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.552 [2024-07-15 14:05:18.214333] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.552 [2024-07-15 14:05:18.214783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.552 [2024-07-15 14:05:18.214807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.552 [2024-07-15 14:05:18.214835] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.552 [2024-07-15 14:05:18.215023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.552 [2024-07-15 14:05:18.215214] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.552 [2024-07-15 14:05:18.215237] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.552 [2024-07-15 14:05:18.215250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.552 [2024-07-15 14:05:18.218165] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.552 [2024-07-15 14:05:18.227466] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.552 [2024-07-15 14:05:18.227921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.552 [2024-07-15 14:05:18.227959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.552 [2024-07-15 14:05:18.227973] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.552 [2024-07-15 14:05:18.228161] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.552 [2024-07-15 14:05:18.228353] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.552 [2024-07-15 14:05:18.228371] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.552 [2024-07-15 14:05:18.228383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.552 [2024-07-15 14:05:18.231282] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.552 [2024-07-15 14:05:18.240563] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.552 [2024-07-15 14:05:18.241024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.552 [2024-07-15 14:05:18.241062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.552 [2024-07-15 14:05:18.241077] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.552 [2024-07-15 14:05:18.241265] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.553 [2024-07-15 14:05:18.241457] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.553 [2024-07-15 14:05:18.241475] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.553 [2024-07-15 14:05:18.241487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.553 [2024-07-15 14:05:18.244405] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.553 [2024-07-15 14:05:18.253682] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.553 [2024-07-15 14:05:18.254154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.553 [2024-07-15 14:05:18.254178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.553 [2024-07-15 14:05:18.254206] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.553 [2024-07-15 14:05:18.254409] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.553 [2024-07-15 14:05:18.254607] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.553 [2024-07-15 14:05:18.254626] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.553 [2024-07-15 14:05:18.254639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.553 [2024-07-15 14:05:18.257552] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.553 [2024-07-15 14:05:18.266770] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.553 [2024-07-15 14:05:18.267214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.553 [2024-07-15 14:05:18.267261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.553 [2024-07-15 14:05:18.267275] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.553 [2024-07-15 14:05:18.267477] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.553 [2024-07-15 14:05:18.267669] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.553 [2024-07-15 14:05:18.267688] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.553 [2024-07-15 14:05:18.267700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.553 [2024-07-15 14:05:18.270619] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.553 [2024-07-15 14:05:18.280217] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.553 [2024-07-15 14:05:18.280662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.553 [2024-07-15 14:05:18.280707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.553 [2024-07-15 14:05:18.280721] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.553 [2024-07-15 14:05:18.280947] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.553 [2024-07-15 14:05:18.281165] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.553 [2024-07-15 14:05:18.281184] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.553 [2024-07-15 14:05:18.281196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.553 [2024-07-15 14:05:18.284199] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.553 [2024-07-15 14:05:18.293420] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.553 [2024-07-15 14:05:18.293878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.553 [2024-07-15 14:05:18.293902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.553 [2024-07-15 14:05:18.293930] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.553 [2024-07-15 14:05:18.294136] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.553 [2024-07-15 14:05:18.294327] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.553 [2024-07-15 14:05:18.294346] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.553 [2024-07-15 14:05:18.294358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.553 [2024-07-15 14:05:18.297257] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.553 [2024-07-15 14:05:18.306567] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.553 [2024-07-15 14:05:18.306981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.553 [2024-07-15 14:05:18.307028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.553 [2024-07-15 14:05:18.307046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.553 [2024-07-15 14:05:18.307249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.553 [2024-07-15 14:05:18.307462] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.553 [2024-07-15 14:05:18.307481] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.553 [2024-07-15 14:05:18.307493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.553 [2024-07-15 14:05:18.310642] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.553 [2024-07-15 14:05:18.320176] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.553 [2024-07-15 14:05:18.320584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.553 [2024-07-15 14:05:18.320630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.553 [2024-07-15 14:05:18.320644] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.553 [2024-07-15 14:05:18.320895] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.553 [2024-07-15 14:05:18.321141] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.553 [2024-07-15 14:05:18.321160] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.553 [2024-07-15 14:05:18.321173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.553 [2024-07-15 14:05:18.324390] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.553 [2024-07-15 14:05:18.333694] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.553 [2024-07-15 14:05:18.334151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.553 [2024-07-15 14:05:18.334196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.553 [2024-07-15 14:05:18.334211] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.553 [2024-07-15 14:05:18.334451] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.553 [2024-07-15 14:05:18.334677] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.553 [2024-07-15 14:05:18.334697] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.553 [2024-07-15 14:05:18.334710] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.553 [2024-07-15 14:05:18.337944] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.553 [2024-07-15 14:05:18.347384] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.553 [2024-07-15 14:05:18.347814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.553 [2024-07-15 14:05:18.347843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.553 [2024-07-15 14:05:18.347859] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.553 [2024-07-15 14:05:18.348105] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.553 [2024-07-15 14:05:18.348297] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.553 [2024-07-15 14:05:18.348320] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.553 [2024-07-15 14:05:18.348333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.553 [2024-07-15 14:05:18.351400] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.553 [2024-07-15 14:05:18.360711] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.553 [2024-07-15 14:05:18.361088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.553 [2024-07-15 14:05:18.361112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.553 [2024-07-15 14:05:18.361140] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.553 [2024-07-15 14:05:18.361342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.553 [2024-07-15 14:05:18.361534] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.553 [2024-07-15 14:05:18.361552] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.553 [2024-07-15 14:05:18.361564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.553 [2024-07-15 14:05:18.364580] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.553 [2024-07-15 14:05:18.373920] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.553 [2024-07-15 14:05:18.374297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.553 [2024-07-15 14:05:18.374344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.553 [2024-07-15 14:05:18.374357] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.553 [2024-07-15 14:05:18.374559] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.553 [2024-07-15 14:05:18.374779] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.553 [2024-07-15 14:05:18.374800] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.553 [2024-07-15 14:05:18.374813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.553 [2024-07-15 14:05:18.377700] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.553 [2024-07-15 14:05:18.387163] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.553 [2024-07-15 14:05:18.387525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.553 [2024-07-15 14:05:18.387561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.553 [2024-07-15 14:05:18.387591] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.553 [2024-07-15 14:05:18.387805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.554 [2024-07-15 14:05:18.388022] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.554 [2024-07-15 14:05:18.388043] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.554 [2024-07-15 14:05:18.388070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.554 [2024-07-15 14:05:18.391218] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.825 [2024-07-15 14:05:18.400845] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.825 [2024-07-15 14:05:18.401322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.825 [2024-07-15 14:05:18.401374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.825 [2024-07-15 14:05:18.401390] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.825 [2024-07-15 14:05:18.401618] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.825 [2024-07-15 14:05:18.401870] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.825 [2024-07-15 14:05:18.401893] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.825 [2024-07-15 14:05:18.401907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.825 [2024-07-15 14:05:18.405279] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.825 [2024-07-15 14:05:18.414527] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.825 [2024-07-15 14:05:18.414905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.825 [2024-07-15 14:05:18.414954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.825 [2024-07-15 14:05:18.414970] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.825 [2024-07-15 14:05:18.415229] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.825 [2024-07-15 14:05:18.415439] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.825 [2024-07-15 14:05:18.415460] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.825 [2024-07-15 14:05:18.415473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.825 [2024-07-15 14:05:18.418939] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.825 [2024-07-15 14:05:18.428216] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.825 [2024-07-15 14:05:18.428671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.825 [2024-07-15 14:05:18.428696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.825 [2024-07-15 14:05:18.428725] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.825 [2024-07-15 14:05:18.428960] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.825 [2024-07-15 14:05:18.429187] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.825 [2024-07-15 14:05:18.429221] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.825 [2024-07-15 14:05:18.429234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.825 [2024-07-15 14:05:18.432554] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.825 [2024-07-15 14:05:18.441514] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.825 [2024-07-15 14:05:18.441952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.825 [2024-07-15 14:05:18.441980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.825 [2024-07-15 14:05:18.441996] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.826 [2024-07-15 14:05:18.442208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.826 [2024-07-15 14:05:18.442401] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.826 [2024-07-15 14:05:18.442419] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.826 [2024-07-15 14:05:18.442431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.826 [2024-07-15 14:05:18.445407] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.826 [2024-07-15 14:05:18.454852] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.826 [2024-07-15 14:05:18.455305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.826 [2024-07-15 14:05:18.455354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.826 [2024-07-15 14:05:18.455368] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.826 [2024-07-15 14:05:18.455570] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.826 [2024-07-15 14:05:18.455793] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.826 [2024-07-15 14:05:18.455815] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.826 [2024-07-15 14:05:18.455829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.826 [2024-07-15 14:05:18.458855] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.826 [2024-07-15 14:05:18.468083] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.826 [2024-07-15 14:05:18.468538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.826 [2024-07-15 14:05:18.468561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.826 [2024-07-15 14:05:18.468590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.826 [2024-07-15 14:05:18.468829] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.826 [2024-07-15 14:05:18.469062] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.826 [2024-07-15 14:05:18.469082] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.826 [2024-07-15 14:05:18.469108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.826 [2024-07-15 14:05:18.472103] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.826 [2024-07-15 14:05:18.481285] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.826 [2024-07-15 14:05:18.481708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.826 [2024-07-15 14:05:18.481767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.826 [2024-07-15 14:05:18.481782] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.826 [2024-07-15 14:05:18.481989] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.826 [2024-07-15 14:05:18.482198] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.826 [2024-07-15 14:05:18.482217] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.826 [2024-07-15 14:05:18.482234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.826 [2024-07-15 14:05:18.485081] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.826 [2024-07-15 14:05:18.494385] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.826 [2024-07-15 14:05:18.494833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.826 [2024-07-15 14:05:18.494858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.826 [2024-07-15 14:05:18.494885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.826 [2024-07-15 14:05:18.495073] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.826 [2024-07-15 14:05:18.495265] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.826 [2024-07-15 14:05:18.495283] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.826 [2024-07-15 14:05:18.495296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.826 [2024-07-15 14:05:18.498205] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.826 [2024-07-15 14:05:18.507451] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.826 [2024-07-15 14:05:18.507852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.826 [2024-07-15 14:05:18.507875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.826 [2024-07-15 14:05:18.507889] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.826 [2024-07-15 14:05:18.508091] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.826 [2024-07-15 14:05:18.508283] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.826 [2024-07-15 14:05:18.508301] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.826 [2024-07-15 14:05:18.508313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.826 [2024-07-15 14:05:18.511243] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.826 [2024-07-15 14:05:18.520483] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.826 [2024-07-15 14:05:18.520914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.826 [2024-07-15 14:05:18.520952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.826 [2024-07-15 14:05:18.520966] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.826 [2024-07-15 14:05:18.521191] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.826 [2024-07-15 14:05:18.521395] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.826 [2024-07-15 14:05:18.521414] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.826 [2024-07-15 14:05:18.521427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.826 [2024-07-15 14:05:18.524787] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.826 [2024-07-15 14:05:18.533770] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.826 [2024-07-15 14:05:18.534275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.826 [2024-07-15 14:05:18.534319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.826 [2024-07-15 14:05:18.534334] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.826 [2024-07-15 14:05:18.534524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.826 [2024-07-15 14:05:18.534715] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.826 [2024-07-15 14:05:18.534757] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.826 [2024-07-15 14:05:18.534771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.826 [2024-07-15 14:05:18.537767] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.826 [2024-07-15 14:05:18.546981] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.826 [2024-07-15 14:05:18.547428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.826 [2024-07-15 14:05:18.547452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.826 [2024-07-15 14:05:18.547481] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.826 [2024-07-15 14:05:18.547669] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.826 [2024-07-15 14:05:18.547910] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.826 [2024-07-15 14:05:18.547930] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.826 [2024-07-15 14:05:18.547943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.826 [2024-07-15 14:05:18.550790] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.826 [2024-07-15 14:05:18.559967] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.826 [2024-07-15 14:05:18.560386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.826 [2024-07-15 14:05:18.560410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.826 [2024-07-15 14:05:18.560438] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.826 [2024-07-15 14:05:18.560626] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.826 [2024-07-15 14:05:18.560847] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.826 [2024-07-15 14:05:18.560867] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.826 [2024-07-15 14:05:18.560880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.826 [2024-07-15 14:05:18.563786] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.826 [2024-07-15 14:05:18.573011] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.826 [2024-07-15 14:05:18.573417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.826 [2024-07-15 14:05:18.573443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.826 [2024-07-15 14:05:18.573469] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.826 [2024-07-15 14:05:18.573658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.826 [2024-07-15 14:05:18.573888] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.826 [2024-07-15 14:05:18.573909] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.826 [2024-07-15 14:05:18.573922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.826 [2024-07-15 14:05:18.576822] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.826 [2024-07-15 14:05:18.586047] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.826 [2024-07-15 14:05:18.586496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.826 [2024-07-15 14:05:18.586534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.827 [2024-07-15 14:05:18.586548] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.827 [2024-07-15 14:05:18.586746] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.827 [2024-07-15 14:05:18.586979] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.827 [2024-07-15 14:05:18.586999] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.827 [2024-07-15 14:05:18.587012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.827 [2024-07-15 14:05:18.589899] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.827 [2024-07-15 14:05:18.599125] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.827 [2024-07-15 14:05:18.599590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.827 [2024-07-15 14:05:18.599635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.827 [2024-07-15 14:05:18.599650] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.827 [2024-07-15 14:05:18.599882] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.827 [2024-07-15 14:05:18.600086] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.827 [2024-07-15 14:05:18.600121] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.827 [2024-07-15 14:05:18.600133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.827 [2024-07-15 14:05:18.603019] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.827 [2024-07-15 14:05:18.612238] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.827 [2024-07-15 14:05:18.612703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.827 [2024-07-15 14:05:18.612755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.827 [2024-07-15 14:05:18.612769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.827 [2024-07-15 14:05:18.612971] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.827 [2024-07-15 14:05:18.613163] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.827 [2024-07-15 14:05:18.613181] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.827 [2024-07-15 14:05:18.613193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.827 [2024-07-15 14:05:18.616008] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.827 [2024-07-15 14:05:18.625308] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.827 [2024-07-15 14:05:18.625771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.827 [2024-07-15 14:05:18.625811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.827 [2024-07-15 14:05:18.625824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.827 [2024-07-15 14:05:18.626026] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.827 [2024-07-15 14:05:18.626218] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.827 [2024-07-15 14:05:18.626236] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.827 [2024-07-15 14:05:18.626248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.827 [2024-07-15 14:05:18.629057] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.827 [2024-07-15 14:05:18.638581] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.827 [2024-07-15 14:05:18.639098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.827 [2024-07-15 14:05:18.639147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.827 [2024-07-15 14:05:18.639161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.827 [2024-07-15 14:05:18.639363] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.827 [2024-07-15 14:05:18.639555] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.827 [2024-07-15 14:05:18.639573] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.827 [2024-07-15 14:05:18.639585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.827 [2024-07-15 14:05:18.642507] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.827 [2024-07-15 14:05:18.651965] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.827 [2024-07-15 14:05:18.652342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.827 [2024-07-15 14:05:18.652367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:23.827 [2024-07-15 14:05:18.652381] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:23.827 [2024-07-15 14:05:18.652575] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:23.827 [2024-07-15 14:05:18.652802] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.827 [2024-07-15 14:05:18.652823] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.827 [2024-07-15 14:05:18.652835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.827 [2024-07-15 14:05:18.656107] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.097 [2024-07-15 14:05:18.665230] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.097 [2024-07-15 14:05:18.665653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.097 [2024-07-15 14:05:18.665705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.097 [2024-07-15 14:05:18.665723] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.097 [2024-07-15 14:05:18.665939] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.097 [2024-07-15 14:05:18.666149] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.097 [2024-07-15 14:05:18.666168] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.097 [2024-07-15 14:05:18.666180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.097 [2024-07-15 14:05:18.669074] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.097 [2024-07-15 14:05:18.678796] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.097 [2024-07-15 14:05:18.679241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.097 [2024-07-15 14:05:18.679290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.097 [2024-07-15 14:05:18.679304] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.097 [2024-07-15 14:05:18.679526] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.097 [2024-07-15 14:05:18.679747] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.097 [2024-07-15 14:05:18.679790] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.097 [2024-07-15 14:05:18.679804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.097 [2024-07-15 14:05:18.682992] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.097 [2024-07-15 14:05:18.691972] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.097 [2024-07-15 14:05:18.692399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.097 [2024-07-15 14:05:18.692422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.097 [2024-07-15 14:05:18.692450] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.097 [2024-07-15 14:05:18.692639] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.097 [2024-07-15 14:05:18.692859] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.097 [2024-07-15 14:05:18.692879] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.097 [2024-07-15 14:05:18.692892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.097 [2024-07-15 14:05:18.695817] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.097 [2024-07-15 14:05:18.705146] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.097 [2024-07-15 14:05:18.705604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.097 [2024-07-15 14:05:18.705642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.097 [2024-07-15 14:05:18.705656] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.097 [2024-07-15 14:05:18.705877] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.097 [2024-07-15 14:05:18.706095] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.097 [2024-07-15 14:05:18.706118] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.097 [2024-07-15 14:05:18.706130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.097 [2024-07-15 14:05:18.709002] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.097 [2024-07-15 14:05:18.718301] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.097 [2024-07-15 14:05:18.718714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.097 [2024-07-15 14:05:18.718745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.097 [2024-07-15 14:05:18.718784] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.097 [2024-07-15 14:05:18.718997] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.097 [2024-07-15 14:05:18.719227] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.097 [2024-07-15 14:05:18.719245] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.097 [2024-07-15 14:05:18.719257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.097 [2024-07-15 14:05:18.722145] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.097 [2024-07-15 14:05:18.731353] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.097 [2024-07-15 14:05:18.731796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.097 [2024-07-15 14:05:18.731820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.097 [2024-07-15 14:05:18.731849] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.097 [2024-07-15 14:05:18.732037] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.097 [2024-07-15 14:05:18.732228] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.097 [2024-07-15 14:05:18.732247] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.097 [2024-07-15 14:05:18.732259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.097 [2024-07-15 14:05:18.735173] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.097 [2024-07-15 14:05:18.744488] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.097 [2024-07-15 14:05:18.744941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.097 [2024-07-15 14:05:18.744979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.097 [2024-07-15 14:05:18.744993] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.097 [2024-07-15 14:05:18.745182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.097 [2024-07-15 14:05:18.745374] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.097 [2024-07-15 14:05:18.745392] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.097 [2024-07-15 14:05:18.745404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.097 [2024-07-15 14:05:18.748317] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.097 [2024-07-15 14:05:18.757679] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.097 [2024-07-15 14:05:18.758162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.097 [2024-07-15 14:05:18.758212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.097 [2024-07-15 14:05:18.758227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.097 [2024-07-15 14:05:18.758415] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.097 [2024-07-15 14:05:18.758607] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.097 [2024-07-15 14:05:18.758626] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.097 [2024-07-15 14:05:18.758638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.097 [2024-07-15 14:05:18.761610] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.098 [2024-07-15 14:05:18.770798] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.098 [2024-07-15 14:05:18.771285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.098 [2024-07-15 14:05:18.771335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.098 [2024-07-15 14:05:18.771349] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.098 [2024-07-15 14:05:18.771587] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.098 [2024-07-15 14:05:18.771835] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.098 [2024-07-15 14:05:18.771857] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.098 [2024-07-15 14:05:18.771871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.098 [2024-07-15 14:05:18.775319] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.098 [2024-07-15 14:05:18.784060] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.098 [2024-07-15 14:05:18.784525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.098 [2024-07-15 14:05:18.784576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.098 [2024-07-15 14:05:18.784590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.098 [2024-07-15 14:05:18.784838] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.098 [2024-07-15 14:05:18.785064] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.098 [2024-07-15 14:05:18.785083] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.098 [2024-07-15 14:05:18.785110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.098 [2024-07-15 14:05:18.788036] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.098 [2024-07-15 14:05:18.797496] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.098 [2024-07-15 14:05:18.797935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.098 [2024-07-15 14:05:18.797959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.098 [2024-07-15 14:05:18.797989] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.098 [2024-07-15 14:05:18.798217] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.098 [2024-07-15 14:05:18.798436] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.098 [2024-07-15 14:05:18.798456] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.098 [2024-07-15 14:05:18.798469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.098 [2024-07-15 14:05:18.801451] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.098 [2024-07-15 14:05:18.810793] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.098 [2024-07-15 14:05:18.811252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.098 [2024-07-15 14:05:18.811290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.098 [2024-07-15 14:05:18.811304] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.098 [2024-07-15 14:05:18.811495] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.098 [2024-07-15 14:05:18.811687] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.098 [2024-07-15 14:05:18.811705] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.098 [2024-07-15 14:05:18.811733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.098 [2024-07-15 14:05:18.814744] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.098 [2024-07-15 14:05:18.824052] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.098 [2024-07-15 14:05:18.824465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.098 [2024-07-15 14:05:18.824489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.098 [2024-07-15 14:05:18.824516] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.098 [2024-07-15 14:05:18.824704] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.098 [2024-07-15 14:05:18.824924] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.098 [2024-07-15 14:05:18.824944] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.098 [2024-07-15 14:05:18.824956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.098 [2024-07-15 14:05:18.827875] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.098 [2024-07-15 14:05:18.837142] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.098 [2024-07-15 14:05:18.837553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.098 [2024-07-15 14:05:18.837577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.098 [2024-07-15 14:05:18.837605] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.098 [2024-07-15 14:05:18.837823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.098 [2024-07-15 14:05:18.838028] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.098 [2024-07-15 14:05:18.838047] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.098 [2024-07-15 14:05:18.838080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.098 [2024-07-15 14:05:18.840966] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.098 [2024-07-15 14:05:18.850226] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.098 [2024-07-15 14:05:18.850655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.098 [2024-07-15 14:05:18.850679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.098 [2024-07-15 14:05:18.850707] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.098 [2024-07-15 14:05:18.850942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.098 [2024-07-15 14:05:18.851160] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.098 [2024-07-15 14:05:18.851179] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.098 [2024-07-15 14:05:18.851191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.098 [2024-07-15 14:05:18.854078] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.098 [2024-07-15 14:05:18.863319] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.098 [2024-07-15 14:05:18.863728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.098 [2024-07-15 14:05:18.863758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.098 [2024-07-15 14:05:18.863788] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.098 [2024-07-15 14:05:18.863982] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.098 [2024-07-15 14:05:18.864191] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.098 [2024-07-15 14:05:18.864209] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.098 [2024-07-15 14:05:18.864222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.098 [2024-07-15 14:05:18.867115] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.098 [2024-07-15 14:05:18.876365] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.098 [2024-07-15 14:05:18.876801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.098 [2024-07-15 14:05:18.876839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.098 [2024-07-15 14:05:18.876854] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.098 [2024-07-15 14:05:18.877042] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.098 [2024-07-15 14:05:18.877233] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.098 [2024-07-15 14:05:18.877252] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.098 [2024-07-15 14:05:18.877264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.098 [2024-07-15 14:05:18.880074] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.098 [2024-07-15 14:05:18.889370] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.098 [2024-07-15 14:05:18.889808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.098 [2024-07-15 14:05:18.889832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.098 [2024-07-15 14:05:18.889860] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.098 [2024-07-15 14:05:18.890048] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.098 [2024-07-15 14:05:18.890240] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.098 [2024-07-15 14:05:18.890258] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.098 [2024-07-15 14:05:18.890270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.098 [2024-07-15 14:05:18.893166] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.098 [2024-07-15 14:05:18.902418] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.098 [2024-07-15 14:05:18.902851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.098 [2024-07-15 14:05:18.902875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.098 [2024-07-15 14:05:18.902904] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.098 [2024-07-15 14:05:18.903092] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.098 [2024-07-15 14:05:18.903284] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.098 [2024-07-15 14:05:18.903302] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.098 [2024-07-15 14:05:18.903314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.098 [2024-07-15 14:05:18.906287] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.098 [2024-07-15 14:05:18.915565] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.098 [2024-07-15 14:05:18.916011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.098 [2024-07-15 14:05:18.916034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.098 [2024-07-15 14:05:18.916063] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.098 [2024-07-15 14:05:18.916251] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.098 [2024-07-15 14:05:18.916443] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.098 [2024-07-15 14:05:18.916461] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.098 [2024-07-15 14:05:18.916473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.098 [2024-07-15 14:05:18.919443] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.098 [2024-07-15 14:05:18.928685] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.098 [2024-07-15 14:05:18.929157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.098 [2024-07-15 14:05:18.929181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.099 [2024-07-15 14:05:18.929194] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.099 [2024-07-15 14:05:18.929409] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.099 [2024-07-15 14:05:18.929600] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.099 [2024-07-15 14:05:18.929619] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.099 [2024-07-15 14:05:18.929631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.099 [2024-07-15 14:05:18.932636] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.358 [2024-07-15 14:05:18.941993] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.358 [2024-07-15 14:05:18.942410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.358 [2024-07-15 14:05:18.942434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.358 [2024-07-15 14:05:18.942462] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.358 [2024-07-15 14:05:18.942651] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.358 [2024-07-15 14:05:18.942891] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.358 [2024-07-15 14:05:18.942911] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.358 [2024-07-15 14:05:18.942924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.358 [2024-07-15 14:05:18.945833] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.358 [2024-07-15 14:05:18.955132] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.358 [2024-07-15 14:05:18.955578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.358 [2024-07-15 14:05:18.955628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.358 [2024-07-15 14:05:18.955642] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.358 [2024-07-15 14:05:18.955891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.358 [2024-07-15 14:05:18.956110] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.358 [2024-07-15 14:05:18.956129] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.358 [2024-07-15 14:05:18.956142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.358 [2024-07-15 14:05:18.959029] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.358 [2024-07-15 14:05:18.968294] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.358 [2024-07-15 14:05:18.968766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.358 [2024-07-15 14:05:18.968806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.358 [2024-07-15 14:05:18.968819] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.358 [2024-07-15 14:05:18.969021] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.358 [2024-07-15 14:05:18.969212] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.358 [2024-07-15 14:05:18.969230] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.358 [2024-07-15 14:05:18.969247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.358 [2024-07-15 14:05:18.972056] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.358 [2024-07-15 14:05:18.981347] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.358 [2024-07-15 14:05:18.981793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.358 [2024-07-15 14:05:18.981818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.358 [2024-07-15 14:05:18.981846] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.358 [2024-07-15 14:05:18.982055] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.358 [2024-07-15 14:05:18.982247] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.358 [2024-07-15 14:05:18.982265] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.358 [2024-07-15 14:05:18.982277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.358 [2024-07-15 14:05:18.985172] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.358 [2024-07-15 14:05:18.994420] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.358 [2024-07-15 14:05:18.994832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.358 [2024-07-15 14:05:18.994865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.358 [2024-07-15 14:05:18.994893] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.358 [2024-07-15 14:05:18.995082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.358 [2024-07-15 14:05:18.995273] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.358 [2024-07-15 14:05:18.995291] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.358 [2024-07-15 14:05:18.995303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.358 [2024-07-15 14:05:18.998217] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.358 [2024-07-15 14:05:19.007585] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.358 [2024-07-15 14:05:19.007998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.358 [2024-07-15 14:05:19.008044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.358 [2024-07-15 14:05:19.008059] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.358 [2024-07-15 14:05:19.008247] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.358 [2024-07-15 14:05:19.008438] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.358 [2024-07-15 14:05:19.008457] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.358 [2024-07-15 14:05:19.008469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.358 [2024-07-15 14:05:19.011388] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.358 [2024-07-15 14:05:19.020647] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.358 [2024-07-15 14:05:19.021128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.358 [2024-07-15 14:05:19.021156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.358 [2024-07-15 14:05:19.021184] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.358 [2024-07-15 14:05:19.021373] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.359 [2024-07-15 14:05:19.021565] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.359 [2024-07-15 14:05:19.021583] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.359 [2024-07-15 14:05:19.021595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.359 [2024-07-15 14:05:19.024973] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.359 [2024-07-15 14:05:19.033966] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.359 [2024-07-15 14:05:19.034452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.359 [2024-07-15 14:05:19.034475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.359 [2024-07-15 14:05:19.034503] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.359 [2024-07-15 14:05:19.034691] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.359 [2024-07-15 14:05:19.034916] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.359 [2024-07-15 14:05:19.034936] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.359 [2024-07-15 14:05:19.034949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.359 [2024-07-15 14:05:19.037904] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.359 [2024-07-15 14:05:19.047220] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.359 [2024-07-15 14:05:19.047675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.359 [2024-07-15 14:05:19.047725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.359 [2024-07-15 14:05:19.047754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.359 [2024-07-15 14:05:19.047965] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.359 [2024-07-15 14:05:19.048175] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.359 [2024-07-15 14:05:19.048193] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.359 [2024-07-15 14:05:19.048206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.359 [2024-07-15 14:05:19.050978] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.359 [2024-07-15 14:05:19.060193] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.359 [2024-07-15 14:05:19.060623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.359 [2024-07-15 14:05:19.060647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.359 [2024-07-15 14:05:19.060675] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.359 [2024-07-15 14:05:19.060909] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.359 [2024-07-15 14:05:19.061135] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.359 [2024-07-15 14:05:19.061154] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.359 [2024-07-15 14:05:19.061167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.359 [2024-07-15 14:05:19.064068] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.359 [2024-07-15 14:05:19.073288] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.359 [2024-07-15 14:05:19.073760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.359 [2024-07-15 14:05:19.073814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.359 [2024-07-15 14:05:19.073827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.359 [2024-07-15 14:05:19.074028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.359 [2024-07-15 14:05:19.074220] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.359 [2024-07-15 14:05:19.074239] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.359 [2024-07-15 14:05:19.074251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.359 [2024-07-15 14:05:19.077139] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.359 [2024-07-15 14:05:19.086490] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.359 [2024-07-15 14:05:19.086850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.359 [2024-07-15 14:05:19.086874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.359 [2024-07-15 14:05:19.086902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.359 [2024-07-15 14:05:19.087103] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.359 [2024-07-15 14:05:19.087295] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.359 [2024-07-15 14:05:19.087315] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.359 [2024-07-15 14:05:19.087343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.359 [2024-07-15 14:05:19.090361] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.359 [2024-07-15 14:05:19.099827] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.359 [2024-07-15 14:05:19.100324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.359 [2024-07-15 14:05:19.100379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.359 [2024-07-15 14:05:19.100393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.359 [2024-07-15 14:05:19.100600] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.359 [2024-07-15 14:05:19.100840] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.359 [2024-07-15 14:05:19.100861] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.359 [2024-07-15 14:05:19.100874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.359 [2024-07-15 14:05:19.103966] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.359 [2024-07-15 14:05:19.113142] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.359 [2024-07-15 14:05:19.113609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.359 [2024-07-15 14:05:19.113659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.359 [2024-07-15 14:05:19.113672] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.359 [2024-07-15 14:05:19.113916] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.359 [2024-07-15 14:05:19.114161] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.359 [2024-07-15 14:05:19.114180] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.359 [2024-07-15 14:05:19.114193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.359 [2024-07-15 14:05:19.117205] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.359 [2024-07-15 14:05:19.126359] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.359 [2024-07-15 14:05:19.126747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.359 [2024-07-15 14:05:19.126788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.359 [2024-07-15 14:05:19.126804] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.359 [2024-07-15 14:05:19.127005] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.359 [2024-07-15 14:05:19.127218] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.359 [2024-07-15 14:05:19.127237] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.359 [2024-07-15 14:05:19.127249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.359 [2024-07-15 14:05:19.130258] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.359 [2024-07-15 14:05:19.139567] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.359 [2024-07-15 14:05:19.139986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.359 [2024-07-15 14:05:19.140011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.359 [2024-07-15 14:05:19.140025] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.359 [2024-07-15 14:05:19.140252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.359 [2024-07-15 14:05:19.140450] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.359 [2024-07-15 14:05:19.140469] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.359 [2024-07-15 14:05:19.140482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.359 [2024-07-15 14:05:19.143489] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.359 [2024-07-15 14:05:19.152953] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.359 [2024-07-15 14:05:19.153317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.359 [2024-07-15 14:05:19.153362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.359 [2024-07-15 14:05:19.153381] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.359 [2024-07-15 14:05:19.153575] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.359 [2024-07-15 14:05:19.153802] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.359 [2024-07-15 14:05:19.153823] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.359 [2024-07-15 14:05:19.153837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.359 [2024-07-15 14:05:19.156805] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.359 [2024-07-15 14:05:19.166147] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.359 [2024-07-15 14:05:19.166519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.359 [2024-07-15 14:05:19.166557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.360 [2024-07-15 14:05:19.166571] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.360 [2024-07-15 14:05:19.166787] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.360 [2024-07-15 14:05:19.166985] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.360 [2024-07-15 14:05:19.167004] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.360 [2024-07-15 14:05:19.167016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.360 [2024-07-15 14:05:19.170063] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.360 [2024-07-15 14:05:19.179411] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.360 [2024-07-15 14:05:19.179856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.360 [2024-07-15 14:05:19.179883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.360 [2024-07-15 14:05:19.179913] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.360 [2024-07-15 14:05:19.180131] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.360 [2024-07-15 14:05:19.180335] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.360 [2024-07-15 14:05:19.180355] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.360 [2024-07-15 14:05:19.180367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.360 [2024-07-15 14:05:19.183403] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.360 [2024-07-15 14:05:19.192755] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.360 [2024-07-15 14:05:19.193136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.360 [2024-07-15 14:05:19.193176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.360 [2024-07-15 14:05:19.193190] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.360 [2024-07-15 14:05:19.193421] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.360 [2024-07-15 14:05:19.193651] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.360 [2024-07-15 14:05:19.193680] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.360 [2024-07-15 14:05:19.193694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.360 [2024-07-15 14:05:19.197012] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.619 [2024-07-15 14:05:19.206213] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.619 [2024-07-15 14:05:19.206658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.619 [2024-07-15 14:05:19.206698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.619 [2024-07-15 14:05:19.206713] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.619 [2024-07-15 14:05:19.206942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.619 [2024-07-15 14:05:19.207165] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.619 [2024-07-15 14:05:19.207186] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.619 [2024-07-15 14:05:19.207198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.619 [2024-07-15 14:05:19.210299] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.619 [2024-07-15 14:05:19.219436] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.619 [2024-07-15 14:05:19.219840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.619 [2024-07-15 14:05:19.219865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.619 [2024-07-15 14:05:19.219879] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.619 [2024-07-15 14:05:19.220106] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.619 [2024-07-15 14:05:19.220304] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.619 [2024-07-15 14:05:19.220323] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.619 [2024-07-15 14:05:19.220336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.619 [2024-07-15 14:05:19.223305] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.619 [2024-07-15 14:05:19.232786] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.619 [2024-07-15 14:05:19.233142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.619 [2024-07-15 14:05:19.233168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.619 [2024-07-15 14:05:19.233182] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.619 [2024-07-15 14:05:19.233376] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.619 [2024-07-15 14:05:19.233574] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.619 [2024-07-15 14:05:19.233592] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.619 [2024-07-15 14:05:19.233605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.619 [2024-07-15 14:05:19.236611] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.619 [2024-07-15 14:05:19.246073] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.619 [2024-07-15 14:05:19.246439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.619 [2024-07-15 14:05:19.246478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.619 [2024-07-15 14:05:19.246492] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.619 [2024-07-15 14:05:19.246701] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.619 [2024-07-15 14:05:19.246947] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.619 [2024-07-15 14:05:19.246968] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.619 [2024-07-15 14:05:19.246982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.619 [2024-07-15 14:05:19.249969] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.619 [2024-07-15 14:05:19.259403] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.619 [2024-07-15 14:05:19.259893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.619 [2024-07-15 14:05:19.259934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.619 [2024-07-15 14:05:19.259950] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.619 [2024-07-15 14:05:19.260169] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.619 [2024-07-15 14:05:19.260373] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.619 [2024-07-15 14:05:19.260393] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.619 [2024-07-15 14:05:19.260405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.619 [2024-07-15 14:05:19.263506] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.619 [2024-07-15 14:05:19.272673] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.619 [2024-07-15 14:05:19.273173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.619 [2024-07-15 14:05:19.273219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.619 [2024-07-15 14:05:19.273234] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.619 [2024-07-15 14:05:19.273460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.619 [2024-07-15 14:05:19.273678] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.619 [2024-07-15 14:05:19.273699] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.619 [2024-07-15 14:05:19.273712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.619 [2024-07-15 14:05:19.276950] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.619 [2024-07-15 14:05:19.286094] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.619 [2024-07-15 14:05:19.286551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.619 [2024-07-15 14:05:19.286591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.619 [2024-07-15 14:05:19.286606] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.619 [2024-07-15 14:05:19.286843] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.619 [2024-07-15 14:05:19.287075] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.619 [2024-07-15 14:05:19.287110] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.620 [2024-07-15 14:05:19.287123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.620 [2024-07-15 14:05:19.290240] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.620 [2024-07-15 14:05:19.299367] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.620 [2024-07-15 14:05:19.299815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.620 [2024-07-15 14:05:19.299841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.620 [2024-07-15 14:05:19.299871] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.620 [2024-07-15 14:05:19.300107] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.620 [2024-07-15 14:05:19.300305] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.620 [2024-07-15 14:05:19.300324] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.620 [2024-07-15 14:05:19.300336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.620 [2024-07-15 14:05:19.303470] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.620 [2024-07-15 14:05:19.312618] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.620 [2024-07-15 14:05:19.313085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.620 [2024-07-15 14:05:19.313110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.620 [2024-07-15 14:05:19.313124] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.620 [2024-07-15 14:05:19.313331] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.620 [2024-07-15 14:05:19.313529] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.620 [2024-07-15 14:05:19.313548] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.620 [2024-07-15 14:05:19.313560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.620 [2024-07-15 14:05:19.316542] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.620 [2024-07-15 14:05:19.325870] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.620 [2024-07-15 14:05:19.326342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.620 [2024-07-15 14:05:19.326380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.620 [2024-07-15 14:05:19.326395] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.620 [2024-07-15 14:05:19.326589] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.620 [2024-07-15 14:05:19.326816] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.620 [2024-07-15 14:05:19.326837] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.620 [2024-07-15 14:05:19.326855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.620 [2024-07-15 14:05:19.329829] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.620 [2024-07-15 14:05:19.339166] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.620 [2024-07-15 14:05:19.339631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.620 [2024-07-15 14:05:19.339655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.620 [2024-07-15 14:05:19.339684] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.620 [2024-07-15 14:05:19.339925] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.620 [2024-07-15 14:05:19.340151] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.620 [2024-07-15 14:05:19.340185] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.620 [2024-07-15 14:05:19.340198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.620 [2024-07-15 14:05:19.343169] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.620 [2024-07-15 14:05:19.352401] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.620 [2024-07-15 14:05:19.352873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.620 [2024-07-15 14:05:19.352899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.620 [2024-07-15 14:05:19.352927] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.620 [2024-07-15 14:05:19.353139] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.620 [2024-07-15 14:05:19.353337] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.620 [2024-07-15 14:05:19.353356] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.620 [2024-07-15 14:05:19.353368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.620 [2024-07-15 14:05:19.356373] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.620 [2024-07-15 14:05:19.365638] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.620 [2024-07-15 14:05:19.366114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.620 [2024-07-15 14:05:19.366139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.620 [2024-07-15 14:05:19.366153] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.620 [2024-07-15 14:05:19.366360] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.620 [2024-07-15 14:05:19.366558] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.620 [2024-07-15 14:05:19.366576] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.620 [2024-07-15 14:05:19.366589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.620 [2024-07-15 14:05:19.369584] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.620 [2024-07-15 14:05:19.378907] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.620 [2024-07-15 14:05:19.379377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.620 [2024-07-15 14:05:19.379416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.620 [2024-07-15 14:05:19.379431] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.620 [2024-07-15 14:05:19.379625] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.620 [2024-07-15 14:05:19.379870] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.620 [2024-07-15 14:05:19.379891] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.620 [2024-07-15 14:05:19.379905] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.620 [2024-07-15 14:05:19.382893] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.620 [2024-07-15 14:05:19.392155] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.620 [2024-07-15 14:05:19.392540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.620 [2024-07-15 14:05:19.392580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.620 [2024-07-15 14:05:19.392593] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.620 [2024-07-15 14:05:19.392847] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.620 [2024-07-15 14:05:19.393072] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.620 [2024-07-15 14:05:19.393106] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.620 [2024-07-15 14:05:19.393119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.620 [2024-07-15 14:05:19.396092] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.620 [2024-07-15 14:05:19.405375] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.620 [2024-07-15 14:05:19.405826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.620 [2024-07-15 14:05:19.405866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.620 [2024-07-15 14:05:19.405882] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.620 [2024-07-15 14:05:19.406082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.620 [2024-07-15 14:05:19.406285] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.620 [2024-07-15 14:05:19.406305] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.620 [2024-07-15 14:05:19.406318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.620 [2024-07-15 14:05:19.409314] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.620 [2024-07-15 14:05:19.418547] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.620 [2024-07-15 14:05:19.419045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.620 [2024-07-15 14:05:19.419069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.620 [2024-07-15 14:05:19.419083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.620 [2024-07-15 14:05:19.419296] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.620 [2024-07-15 14:05:19.419494] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.620 [2024-07-15 14:05:19.419513] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.620 [2024-07-15 14:05:19.419525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.620 [2024-07-15 14:05:19.422557] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.620 [2024-07-15 14:05:19.432058] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.620 [2024-07-15 14:05:19.432579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.620 [2024-07-15 14:05:19.432619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.620 [2024-07-15 14:05:19.432634] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.620 [2024-07-15 14:05:19.432875] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.621 [2024-07-15 14:05:19.433114] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.621 [2024-07-15 14:05:19.433134] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.621 [2024-07-15 14:05:19.433146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.621 [2024-07-15 14:05:19.436118] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.621 [2024-07-15 14:05:19.445337] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.621 [2024-07-15 14:05:19.445718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.621 [2024-07-15 14:05:19.445765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.621 [2024-07-15 14:05:19.445780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.621 [2024-07-15 14:05:19.446001] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.621 [2024-07-15 14:05:19.446215] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.621 [2024-07-15 14:05:19.446234] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.621 [2024-07-15 14:05:19.446247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.621 [2024-07-15 14:05:19.449319] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.621 [2024-07-15 14:05:19.459064] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.881 [2024-07-15 14:05:19.459435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.881 [2024-07-15 14:05:19.459461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.881 [2024-07-15 14:05:19.459475] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.881 [2024-07-15 14:05:19.459690] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.881 [2024-07-15 14:05:19.459932] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.881 [2024-07-15 14:05:19.459954] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.881 [2024-07-15 14:05:19.459968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.881 [2024-07-15 14:05:19.463124] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.881 [2024-07-15 14:05:19.472541] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.881 [2024-07-15 14:05:19.472945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.881 [2024-07-15 14:05:19.472972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.881 [2024-07-15 14:05:19.472987] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.882 [2024-07-15 14:05:19.473215] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.882 [2024-07-15 14:05:19.473413] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.882 [2024-07-15 14:05:19.473432] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.882 [2024-07-15 14:05:19.473444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.882 [2024-07-15 14:05:19.476485] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.882 [2024-07-15 14:05:19.485975] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.882 [2024-07-15 14:05:19.486392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.882 [2024-07-15 14:05:19.486417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.882 [2024-07-15 14:05:19.486431] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.882 [2024-07-15 14:05:19.486645] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.882 [2024-07-15 14:05:19.486896] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.882 [2024-07-15 14:05:19.486918] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.882 [2024-07-15 14:05:19.486932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.882 [2024-07-15 14:05:19.489972] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.882 [2024-07-15 14:05:19.499311] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.882 [2024-07-15 14:05:19.499724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.882 [2024-07-15 14:05:19.499755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.882 [2024-07-15 14:05:19.499785] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.882 [2024-07-15 14:05:19.499985] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.882 [2024-07-15 14:05:19.500206] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.882 [2024-07-15 14:05:19.500224] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.882 [2024-07-15 14:05:19.500237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.882 [2024-07-15 14:05:19.503243] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.882 [2024-07-15 14:05:19.512533] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.882 [2024-07-15 14:05:19.512885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.882 [2024-07-15 14:05:19.512916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.882 [2024-07-15 14:05:19.512947] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.882 [2024-07-15 14:05:19.513171] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.882 [2024-07-15 14:05:19.513368] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.882 [2024-07-15 14:05:19.513387] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.882 [2024-07-15 14:05:19.513400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.882 [2024-07-15 14:05:19.516406] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.882 [2024-07-15 14:05:19.525883] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.882 [2024-07-15 14:05:19.526315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.882 [2024-07-15 14:05:19.526356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.882 [2024-07-15 14:05:19.526371] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.882 [2024-07-15 14:05:19.526592] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.882 [2024-07-15 14:05:19.526833] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.882 [2024-07-15 14:05:19.526855] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.882 [2024-07-15 14:05:19.526868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.882 [2024-07-15 14:05:19.530207] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.882 [2024-07-15 14:05:19.539326] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.882 [2024-07-15 14:05:19.539697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.882 [2024-07-15 14:05:19.539742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.882 [2024-07-15 14:05:19.539760] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.882 [2024-07-15 14:05:19.539979] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.882 [2024-07-15 14:05:19.540212] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.882 [2024-07-15 14:05:19.540231] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.882 [2024-07-15 14:05:19.540244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.882 [2024-07-15 14:05:19.543312] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.882 [2024-07-15 14:05:19.552553] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.882 [2024-07-15 14:05:19.552913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.882 [2024-07-15 14:05:19.552940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.882 [2024-07-15 14:05:19.552970] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.882 [2024-07-15 14:05:19.553199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.882 [2024-07-15 14:05:19.553402] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.882 [2024-07-15 14:05:19.553422] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.882 [2024-07-15 14:05:19.553434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.882 [2024-07-15 14:05:19.556424] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.882 [2024-07-15 14:05:19.565776] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.882 [2024-07-15 14:05:19.566160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.882 [2024-07-15 14:05:19.566200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.882 [2024-07-15 14:05:19.566214] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.882 [2024-07-15 14:05:19.566423] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.882 [2024-07-15 14:05:19.566621] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.882 [2024-07-15 14:05:19.566640] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.882 [2024-07-15 14:05:19.566652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.882 [2024-07-15 14:05:19.569655] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.882 [2024-07-15 14:05:19.579012] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.882 [2024-07-15 14:05:19.579376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.882 [2024-07-15 14:05:19.579401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.882 [2024-07-15 14:05:19.579416] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.882 [2024-07-15 14:05:19.579610] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.882 [2024-07-15 14:05:19.579838] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.882 [2024-07-15 14:05:19.579859] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.882 [2024-07-15 14:05:19.579873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.882 [2024-07-15 14:05:19.582863] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.882 [2024-07-15 14:05:19.592319] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.882 [2024-07-15 14:05:19.592756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.882 [2024-07-15 14:05:19.592796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.882 [2024-07-15 14:05:19.592810] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.882 [2024-07-15 14:05:19.593024] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.882 [2024-07-15 14:05:19.593238] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.882 [2024-07-15 14:05:19.593257] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.882 [2024-07-15 14:05:19.593269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.882 [2024-07-15 14:05:19.596269] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.882 [2024-07-15 14:05:19.605595] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.882 [2024-07-15 14:05:19.605973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.882 [2024-07-15 14:05:19.606000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.882 [2024-07-15 14:05:19.606015] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.882 [2024-07-15 14:05:19.606225] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.882 [2024-07-15 14:05:19.606441] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.882 [2024-07-15 14:05:19.606460] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.882 [2024-07-15 14:05:19.606474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.882 [2024-07-15 14:05:19.609495] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.882 [2024-07-15 14:05:19.618823] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.882 [2024-07-15 14:05:19.619271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.882 [2024-07-15 14:05:19.619309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.882 [2024-07-15 14:05:19.619324] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.882 [2024-07-15 14:05:19.619518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.882 [2024-07-15 14:05:19.619731] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.882 [2024-07-15 14:05:19.619759] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.882 [2024-07-15 14:05:19.619773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.882 [2024-07-15 14:05:19.622769] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.882 [2024-07-15 14:05:19.632028] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.882 [2024-07-15 14:05:19.632472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.882 [2024-07-15 14:05:19.632496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.882 [2024-07-15 14:05:19.632510] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.882 [2024-07-15 14:05:19.632731] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.882 [2024-07-15 14:05:19.632966] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.882 [2024-07-15 14:05:19.632987] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.882 [2024-07-15 14:05:19.633000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.882 [2024-07-15 14:05:19.635985] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.882 [2024-07-15 14:05:19.645216] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.882 [2024-07-15 14:05:19.645668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.882 [2024-07-15 14:05:19.645718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.882 [2024-07-15 14:05:19.645745] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.882 [2024-07-15 14:05:19.645977] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.882 [2024-07-15 14:05:19.646213] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.882 [2024-07-15 14:05:19.646232] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.882 [2024-07-15 14:05:19.646245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.882 [2024-07-15 14:05:19.649214] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.882 [2024-07-15 14:05:19.658492] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.882 [2024-07-15 14:05:19.658933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.882 [2024-07-15 14:05:19.658958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.882 [2024-07-15 14:05:19.658988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.882 [2024-07-15 14:05:19.659198] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.882 [2024-07-15 14:05:19.659396] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.882 [2024-07-15 14:05:19.659415] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.882 [2024-07-15 14:05:19.659427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.882 [2024-07-15 14:05:19.662407] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.883 [2024-07-15 14:05:19.671846] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.883 [2024-07-15 14:05:19.672340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.883 [2024-07-15 14:05:19.672378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.883 [2024-07-15 14:05:19.672393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.883 [2024-07-15 14:05:19.672587] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.883 [2024-07-15 14:05:19.672828] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.883 [2024-07-15 14:05:19.672849] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.883 [2024-07-15 14:05:19.672863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.883 [2024-07-15 14:05:19.675852] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.883 [2024-07-15 14:05:19.685153] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.883 [2024-07-15 14:05:19.685621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.883 [2024-07-15 14:05:19.685670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.883 [2024-07-15 14:05:19.685685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.883 [2024-07-15 14:05:19.685933] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.883 [2024-07-15 14:05:19.686172] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.883 [2024-07-15 14:05:19.686195] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.883 [2024-07-15 14:05:19.686209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.883 [2024-07-15 14:05:19.689183] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.883 [2024-07-15 14:05:19.698454] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.883 [2024-07-15 14:05:19.698874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.883 [2024-07-15 14:05:19.698900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.883 [2024-07-15 14:05:19.698930] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.883 [2024-07-15 14:05:19.699142] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.883 [2024-07-15 14:05:19.699340] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.883 [2024-07-15 14:05:19.699359] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.883 [2024-07-15 14:05:19.699371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.883 [2024-07-15 14:05:19.702339] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.883 [2024-07-15 14:05:19.711745] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.883 [2024-07-15 14:05:19.712192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.883 [2024-07-15 14:05:19.712216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:24.883 [2024-07-15 14:05:19.712245] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:24.883 [2024-07-15 14:05:19.712439] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:24.883 [2024-07-15 14:05:19.712636] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.883 [2024-07-15 14:05:19.712655] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.883 [2024-07-15 14:05:19.712667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.883 [2024-07-15 14:05:19.715702] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:25.142 [2024-07-15 14:05:19.725148] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.142 [2024-07-15 14:05:19.725605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.142 [2024-07-15 14:05:19.725645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.142 [2024-07-15 14:05:19.725660] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.142 [2024-07-15 14:05:19.725913] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.142 [2024-07-15 14:05:19.726152] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.142 [2024-07-15 14:05:19.726171] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.142 [2024-07-15 14:05:19.726184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.142 [2024-07-15 14:05:19.729231] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.142 [2024-07-15 14:05:19.738332] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.142 [2024-07-15 14:05:19.738762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.142 [2024-07-15 14:05:19.738802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.142 [2024-07-15 14:05:19.738818] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.142 [2024-07-15 14:05:19.739032] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.142 [2024-07-15 14:05:19.739230] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.142 [2024-07-15 14:05:19.739249] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.142 [2024-07-15 14:05:19.739262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.142 [2024-07-15 14:05:19.742275] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:25.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3857533 Killed "${NVMF_APP[@]}" "$@" 00:26:25.142 14:05:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:26:25.142 14:05:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:25.142 14:05:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:25.142 14:05:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:25.142 14:05:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:25.142 [2024-07-15 14:05:19.751754] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.142 [2024-07-15 14:05:19.752205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.142 [2024-07-15 14:05:19.752243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.142 [2024-07-15 14:05:19.752258] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.142 [2024-07-15 14:05:19.752452] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.142 [2024-07-15 14:05:19.752649] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.142 [2024-07-15 14:05:19.752668] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.142 [2024-07-15 14:05:19.752680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:26:25.142 14:05:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3858616 00:26:25.142 14:05:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:25.142 14:05:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3858616 00:26:25.142 14:05:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 3858616 ']' 00:26:25.142 14:05:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:25.142 14:05:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:25.142 14:05:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:25.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:25.142 14:05:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:25.142 14:05:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:25.142 [2024-07-15 14:05:19.755763] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:25.142 [2024-07-15 14:05:19.765194] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.142 [2024-07-15 14:05:19.765566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.142 [2024-07-15 14:05:19.765593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.142 [2024-07-15 14:05:19.765608] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.142 [2024-07-15 14:05:19.765833] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.142 [2024-07-15 14:05:19.766059] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.142 [2024-07-15 14:05:19.766079] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.142 [2024-07-15 14:05:19.766106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.142 [2024-07-15 14:05:19.769140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
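At this point bdevperf.sh has reaped the original nvmf target (the "Killed" message above) and tgt_init/nvmfappstart relaunch it as pid 3858616 inside the cvl_0_0_ns_spdk namespace, then wait for the app's RPC UNIX socket before sending any RPCs. A minimal sketch of that start-and-poll pattern, with illustrative paths and timeouts rather than the real autotest helpers:

    # start the target in the background (flags as in the trace above) and
    # poll for its RPC UNIX socket before talking to it
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    pid=$!
    for _ in $(seq 1 100); do
        [ -S /var/tmp/spdk.sock ] && break   # socket appears once the app is up
        sleep 0.1
    done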
00:26:25.142 [2024-07-15 14:05:19.778622] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.142 [2024-07-15 14:05:19.779107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.142 [2024-07-15 14:05:19.779134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.142 [2024-07-15 14:05:19.779163] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.142 [2024-07-15 14:05:19.779398] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.142 [2024-07-15 14:05:19.779629] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.142 [2024-07-15 14:05:19.779650] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.142 [2024-07-15 14:05:19.779663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.142 [2024-07-15 14:05:19.783140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:25.142 [2024-07-15 14:05:19.792005] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.142 [2024-07-15 14:05:19.792465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.142 [2024-07-15 14:05:19.792489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.142 [2024-07-15 14:05:19.792517] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.142 [2024-07-15 14:05:19.792712] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.142 [2024-07-15 14:05:19.792969] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.142 [2024-07-15 14:05:19.792991] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.142 [2024-07-15 14:05:19.793005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.143 [2024-07-15 14:05:19.796117] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:25.143 [2024-07-15 14:05:19.800421] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
00:26:25.143 [2024-07-15 14:05:19.800496] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:25.143 [2024-07-15 14:05:19.805335] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.143 [2024-07-15 14:05:19.805733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.143 [2024-07-15 14:05:19.805770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.143 [2024-07-15 14:05:19.805801] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.143 [2024-07-15 14:05:19.806008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.143 [2024-07-15 14:05:19.806239] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.143 [2024-07-15 14:05:19.806259] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.143 [2024-07-15 14:05:19.806272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.143 [2024-07-15 14:05:19.809240] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:25.143 [2024-07-15 14:05:19.819098] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.143 [2024-07-15 14:05:19.819541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.143 [2024-07-15 14:05:19.819568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.143 [2024-07-15 14:05:19.819597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.143 [2024-07-15 14:05:19.819836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.143 [2024-07-15 14:05:19.820068] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.143 [2024-07-15 14:05:19.820103] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.143 [2024-07-15 14:05:19.820116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.143 [2024-07-15 14:05:19.823215] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.143 [2024-07-15 14:05:19.832647] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.143 [2024-07-15 14:05:19.833021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.143 [2024-07-15 14:05:19.833065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.143 [2024-07-15 14:05:19.833080] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.143 [2024-07-15 14:05:19.833295] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.143 [2024-07-15 14:05:19.833514] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.143 [2024-07-15 14:05:19.833534] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.143 [2024-07-15 14:05:19.833548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.143 EAL: No free 2048 kB hugepages reported on node 1 00:26:25.143 [2024-07-15 14:05:19.836701] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:25.143 [2024-07-15 14:05:19.846166] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.143 [2024-07-15 14:05:19.846622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.143 [2024-07-15 14:05:19.846661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.143 [2024-07-15 14:05:19.846677] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.143 [2024-07-15 14:05:19.846910] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.143 [2024-07-15 14:05:19.847145] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.143 [2024-07-15 14:05:19.847166] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.143 [2024-07-15 14:05:19.847179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.143 [2024-07-15 14:05:19.850286] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.143 [2024-07-15 14:05:19.859528] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.143 [2024-07-15 14:05:19.860008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.143 [2024-07-15 14:05:19.860049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.143 [2024-07-15 14:05:19.860065] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.143 [2024-07-15 14:05:19.860280] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.143 [2024-07-15 14:05:19.860484] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.143 [2024-07-15 14:05:19.860504] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.143 [2024-07-15 14:05:19.860517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.143 [2024-07-15 14:05:19.863599] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:25.143 [2024-07-15 14:05:19.869163] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:25.143 [2024-07-15 14:05:19.872894] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.143 [2024-07-15 14:05:19.873283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.143 [2024-07-15 14:05:19.873325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.143 [2024-07-15 14:05:19.873340] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.143 [2024-07-15 14:05:19.873558] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.143 [2024-07-15 14:05:19.873803] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.143 [2024-07-15 14:05:19.873825] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.143 [2024-07-15 14:05:19.873840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.143 [2024-07-15 14:05:19.876922] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
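The "-m 0xE" core mask passed to nvmf_tgt above accounts for both the "Total cores available: 3" notice here and the three reactors that start on cores 1, 2 and 3 a few lines below: 0xE is binary 1110, so CPU bits 1 through 3 are set. An illustrative decode of the mask:

    # decode the 0xE core mask: print each CPU whose bit is set
    mask=$((0xE))
    for i in $(seq 0 7); do
        [ $(( (mask >> i) & 1 )) -eq 1 ] && echo "core $i"
    done
    # prints: core 1, core 2, core 3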
00:26:25.143 [2024-07-15 14:05:19.886476] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.143 [2024-07-15 14:05:19.887039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.143 [2024-07-15 14:05:19.887091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.143 [2024-07-15 14:05:19.887109] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.143 [2024-07-15 14:05:19.887369] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.143 [2024-07-15 14:05:19.887603] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.143 [2024-07-15 14:05:19.887624] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.143 [2024-07-15 14:05:19.887650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.143 [2024-07-15 14:05:19.890857] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:25.143 [2024-07-15 14:05:19.899922] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.143 [2024-07-15 14:05:19.900400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.143 [2024-07-15 14:05:19.900426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.143 [2024-07-15 14:05:19.900466] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.143 [2024-07-15 14:05:19.900666] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.143 [2024-07-15 14:05:19.900925] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.143 [2024-07-15 14:05:19.900947] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.143 [2024-07-15 14:05:19.900961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.143 [2024-07-15 14:05:19.904090] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.143 [2024-07-15 14:05:19.913265] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.143 [2024-07-15 14:05:19.913654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.144 [2024-07-15 14:05:19.913694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.144 [2024-07-15 14:05:19.913709] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.144 [2024-07-15 14:05:19.913962] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.144 [2024-07-15 14:05:19.914204] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.144 [2024-07-15 14:05:19.914225] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.144 [2024-07-15 14:05:19.914238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.144 [2024-07-15 14:05:19.917298] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:25.144 [2024-07-15 14:05:19.926771] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.144 [2024-07-15 14:05:19.927261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.144 [2024-07-15 14:05:19.927303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.144 [2024-07-15 14:05:19.927319] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.144 [2024-07-15 14:05:19.927522] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.144 [2024-07-15 14:05:19.927762] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.144 [2024-07-15 14:05:19.927784] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.144 [2024-07-15 14:05:19.927799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.144 [2024-07-15 14:05:19.930888] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.144 [2024-07-15 14:05:19.940168] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.144 [2024-07-15 14:05:19.940615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.144 [2024-07-15 14:05:19.940673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.144 [2024-07-15 14:05:19.940705] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.144 [2024-07-15 14:05:19.940945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.144 [2024-07-15 14:05:19.941172] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.144 [2024-07-15 14:05:19.941192] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.144 [2024-07-15 14:05:19.941208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.144 [2024-07-15 14:05:19.944288] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:25.144 [2024-07-15 14:05:19.953505] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.144 [2024-07-15 14:05:19.953924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.144 [2024-07-15 14:05:19.953951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.144 [2024-07-15 14:05:19.953980] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.144 [2024-07-15 14:05:19.954199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.144 [2024-07-15 14:05:19.954406] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.144 [2024-07-15 14:05:19.954426] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.144 [2024-07-15 14:05:19.954439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.144 [2024-07-15 14:05:19.957507] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.144 [2024-07-15 14:05:19.966871] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.144 [2024-07-15 14:05:19.967293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.144 [2024-07-15 14:05:19.967319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.144 [2024-07-15 14:05:19.967349] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.144 [2024-07-15 14:05:19.967549] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.144 [2024-07-15 14:05:19.967796] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.144 [2024-07-15 14:05:19.967818] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.144 [2024-07-15 14:05:19.967832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.144 [2024-07-15 14:05:19.970909] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:25.144 [2024-07-15 14:05:19.980345] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.144 [2024-07-15 14:05:19.980747] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:25.144 [2024-07-15 14:05:19.980760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.144 [2024-07-15 14:05:19.980782] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:25.144 [2024-07-15 14:05:19.980789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.144 [2024-07-15 14:05:19.980797] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:25.144 [2024-07-15 14:05:19.980812] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:25.144 [2024-07-15 14:05:19.980814] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.144 [2024-07-15 14:05:19.980823] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:25.144 [2024-07-15 14:05:19.980894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:25.144 [2024-07-15 14:05:19.980958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:25.144 [2024-07-15 14:05:19.981037] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.144 [2024-07-15 14:05:19.980958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:25.144 [2024-07-15 14:05:19.981253] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.144 [2024-07-15 14:05:19.981274] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.144 [2024-07-15 14:05:19.981288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:26:25.404 [2024-07-15 14:05:19.984533] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:25.404 [2024-07-15 14:05:19.993841] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.404 [2024-07-15 14:05:19.994361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.404 [2024-07-15 14:05:19.994412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.404 [2024-07-15 14:05:19.994431] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.404 [2024-07-15 14:05:19.994647] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.404 [2024-07-15 14:05:19.994885] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.404 [2024-07-15 14:05:19.994907] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.404 [2024-07-15 14:05:19.994923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.404 [2024-07-15 14:05:19.998129] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:25.404 [2024-07-15 14:05:20.007712] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.404 [2024-07-15 14:05:20.008307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.405 [2024-07-15 14:05:20.008344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.405 [2024-07-15 14:05:20.008365] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.405 [2024-07-15 14:05:20.008596] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.405 [2024-07-15 14:05:20.008829] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.405 [2024-07-15 14:05:20.008852] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.405 [2024-07-15 14:05:20.008869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.405 [2024-07-15 14:05:20.012083] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.405 [2024-07-15 14:05:20.021454] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.405 [2024-07-15 14:05:20.022007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.405 [2024-07-15 14:05:20.022055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.405 [2024-07-15 14:05:20.022075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.405 [2024-07-15 14:05:20.022299] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.405 [2024-07-15 14:05:20.022521] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.405 [2024-07-15 14:05:20.022542] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.405 [2024-07-15 14:05:20.022558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.405 [2024-07-15 14:05:20.025842] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:25.405 [2024-07-15 14:05:20.035115] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.405 [2024-07-15 14:05:20.035655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.405 [2024-07-15 14:05:20.035689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.405 [2024-07-15 14:05:20.035708] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.405 [2024-07-15 14:05:20.035952] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.405 [2024-07-15 14:05:20.036174] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.405 [2024-07-15 14:05:20.036195] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.405 [2024-07-15 14:05:20.036213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.405 [2024-07-15 14:05:20.039421] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.405 [2024-07-15 14:05:20.048691] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.405 [2024-07-15 14:05:20.049251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.405 [2024-07-15 14:05:20.049302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.405 [2024-07-15 14:05:20.049321] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.405 [2024-07-15 14:05:20.049538] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.405 [2024-07-15 14:05:20.049777] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.405 [2024-07-15 14:05:20.049800] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.405 [2024-07-15 14:05:20.049817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.405 [2024-07-15 14:05:20.053069] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:25.405 [2024-07-15 14:05:20.062462] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.405 [2024-07-15 14:05:20.063071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.405 [2024-07-15 14:05:20.063122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.405 [2024-07-15 14:05:20.063140] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.405 [2024-07-15 14:05:20.063374] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.405 [2024-07-15 14:05:20.063615] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.405 [2024-07-15 14:05:20.063636] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.405 [2024-07-15 14:05:20.063653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.405 [2024-07-15 14:05:20.066910] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.405 [2024-07-15 14:05:20.076261] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.405 [2024-07-15 14:05:20.076773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.405 [2024-07-15 14:05:20.076802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.405 [2024-07-15 14:05:20.076818] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.405 [2024-07-15 14:05:20.077033] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.405 [2024-07-15 14:05:20.077251] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.405 [2024-07-15 14:05:20.077272] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.405 [2024-07-15 14:05:20.077287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.405 [2024-07-15 14:05:20.080544] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:25.405 [2024-07-15 14:05:20.090604] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.405 [2024-07-15 14:05:20.091111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.405 [2024-07-15 14:05:20.091140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.405 [2024-07-15 14:05:20.091171] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.405 [2024-07-15 14:05:20.091400] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.405 [2024-07-15 14:05:20.091634] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.405 [2024-07-15 14:05:20.091655] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.405 [2024-07-15 14:05:20.091669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.405 [2024-07-15 14:05:20.094906] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.405 [2024-07-15 14:05:20.104178] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.405 [2024-07-15 14:05:20.104671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.405 [2024-07-15 14:05:20.104712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.405 [2024-07-15 14:05:20.104728] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.405 [2024-07-15 14:05:20.104967] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.405 [2024-07-15 14:05:20.105197] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.405 [2024-07-15 14:05:20.105217] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.405 [2024-07-15 14:05:20.105232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.405 [2024-07-15 14:05:20.108418] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:25.405 [2024-07-15 14:05:20.117577] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.405 [2024-07-15 14:05:20.118079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.405 [2024-07-15 14:05:20.118120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.405 [2024-07-15 14:05:20.118136] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.405 [2024-07-15 14:05:20.118344] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.405 [2024-07-15 14:05:20.118554] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.405 [2024-07-15 14:05:20.118574] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.405 [2024-07-15 14:05:20.118588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.405 [2024-07-15 14:05:20.121798] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.405 [2024-07-15 14:05:20.131146] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.405 [2024-07-15 14:05:20.131599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.405 [2024-07-15 14:05:20.131641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.405 [2024-07-15 14:05:20.131657] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.405 [2024-07-15 14:05:20.131893] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.405 [2024-07-15 14:05:20.132125] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.405 [2024-07-15 14:05:20.132146] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.405 [2024-07-15 14:05:20.132159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.405 [2024-07-15 14:05:20.135305] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:25.405 [2024-07-15 14:05:20.144642] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.405 [2024-07-15 14:05:20.145146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.405 [2024-07-15 14:05:20.145187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.405 [2024-07-15 14:05:20.145203] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.405 [2024-07-15 14:05:20.145409] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.405 [2024-07-15 14:05:20.145619] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.405 [2024-07-15 14:05:20.145640] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.405 [2024-07-15 14:05:20.145653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.405 [2024-07-15 14:05:20.148859] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.406 [2024-07-15 14:05:20.158182] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.406 [2024-07-15 14:05:20.158648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.406 [2024-07-15 14:05:20.158689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.406 [2024-07-15 14:05:20.158710] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.406 [2024-07-15 14:05:20.158946] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.406 [2024-07-15 14:05:20.159176] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.406 [2024-07-15 14:05:20.159197] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.406 [2024-07-15 14:05:20.159210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.406 [2024-07-15 14:05:20.162386] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:25.406 [2024-07-15 14:05:20.171727] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.406 [2024-07-15 14:05:20.172215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.406 [2024-07-15 14:05:20.172256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.406 [2024-07-15 14:05:20.172271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.406 [2024-07-15 14:05:20.172478] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.406 [2024-07-15 14:05:20.172688] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.406 [2024-07-15 14:05:20.172708] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.406 [2024-07-15 14:05:20.172745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.406 [2024-07-15 14:05:20.175906] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.406 [2024-07-15 14:05:20.185220] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.406 [2024-07-15 14:05:20.185694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.406 [2024-07-15 14:05:20.185720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.406 [2024-07-15 14:05:20.185759] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.406 [2024-07-15 14:05:20.185988] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.406 [2024-07-15 14:05:20.186217] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.406 [2024-07-15 14:05:20.186238] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.406 [2024-07-15 14:05:20.186251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.406 [2024-07-15 14:05:20.189428] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:25.406 [2024-07-15 14:05:20.198798] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.406 [2024-07-15 14:05:20.199317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.406 [2024-07-15 14:05:20.199358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.406 [2024-07-15 14:05:20.199373] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.406 [2024-07-15 14:05:20.199579] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.406 [2024-07-15 14:05:20.199816] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.406 [2024-07-15 14:05:20.199843] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.406 [2024-07-15 14:05:20.199857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.406 [2024-07-15 14:05:20.203075] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.406 [2024-07-15 14:05:20.212241] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.406 [2024-07-15 14:05:20.212729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.406 [2024-07-15 14:05:20.212777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.406 [2024-07-15 14:05:20.212793] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.406 [2024-07-15 14:05:20.213021] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.406 [2024-07-15 14:05:20.213247] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.406 [2024-07-15 14:05:20.213268] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.406 [2024-07-15 14:05:20.213281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.406 [2024-07-15 14:05:20.216455] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:25.406 [2024-07-15 14:05:20.225813] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.406 [2024-07-15 14:05:20.226319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.406 [2024-07-15 14:05:20.226345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.406 [2024-07-15 14:05:20.226376] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.406 [2024-07-15 14:05:20.226582] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.406 [2024-07-15 14:05:20.226820] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.406 [2024-07-15 14:05:20.226842] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.406 [2024-07-15 14:05:20.226856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.406 [2024-07-15 14:05:20.230054] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.406 [2024-07-15 14:05:20.239411] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.406 [2024-07-15 14:05:20.239892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.406 [2024-07-15 14:05:20.239920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.406 [2024-07-15 14:05:20.239936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.406 [2024-07-15 14:05:20.240161] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.406 [2024-07-15 14:05:20.240372] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.406 [2024-07-15 14:05:20.240393] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.406 [2024-07-15 14:05:20.240406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.406 [2024-07-15 14:05:20.243682] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:25.665 [2024-07-15 14:05:20.253049] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.665 [2024-07-15 14:05:20.253535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.665 [2024-07-15 14:05:20.253576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.665 [2024-07-15 14:05:20.253592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.665 [2024-07-15 14:05:20.253828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.665 [2024-07-15 14:05:20.254062] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.665 [2024-07-15 14:05:20.254082] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.665 [2024-07-15 14:05:20.254095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.665 [2024-07-15 14:05:20.257274] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.665 [2024-07-15 14:05:20.266423] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.665 [2024-07-15 14:05:20.266909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.665 [2024-07-15 14:05:20.266950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.665 [2024-07-15 14:05:20.266966] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.665 [2024-07-15 14:05:20.267173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.665 [2024-07-15 14:05:20.267383] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.665 [2024-07-15 14:05:20.267404] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.665 [2024-07-15 14:05:20.267417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.665 [2024-07-15 14:05:20.270562] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:25.665 [2024-07-15 14:05:20.279891] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.665 [2024-07-15 14:05:20.280372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.665 [2024-07-15 14:05:20.280413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.665 [2024-07-15 14:05:20.280429] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.665 [2024-07-15 14:05:20.280635] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.665 [2024-07-15 14:05:20.280875] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.665 [2024-07-15 14:05:20.280897] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.665 [2024-07-15 14:05:20.280911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.665 [2024-07-15 14:05:20.284118] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.665 [2024-07-15 14:05:20.293446] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.665 [2024-07-15 14:05:20.293935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.665 [2024-07-15 14:05:20.293963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.665 [2024-07-15 14:05:20.293984] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.665 [2024-07-15 14:05:20.294199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.665 [2024-07-15 14:05:20.294416] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.665 [2024-07-15 14:05:20.294437] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.665 [2024-07-15 14:05:20.294451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.665 [2024-07-15 14:05:20.297747] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:25.665 [2024-07-15 14:05:20.307045] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.665 [2024-07-15 14:05:20.307453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.665 [2024-07-15 14:05:20.307490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.665 [2024-07-15 14:05:20.307520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.665 [2024-07-15 14:05:20.307750] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.665 [2024-07-15 14:05:20.307973] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.665 [2024-07-15 14:05:20.307994] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.665 [2024-07-15 14:05:20.308007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.665 [2024-07-15 14:05:20.311292] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.665 [2024-07-15 14:05:20.320423] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.665 [2024-07-15 14:05:20.320798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.665 [2024-07-15 14:05:20.320840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.665 [2024-07-15 14:05:20.320856] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.665 [2024-07-15 14:05:20.321098] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.665 [2024-07-15 14:05:20.321309] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.665 [2024-07-15 14:05:20.321329] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.665 [2024-07-15 14:05:20.321343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.665 [2024-07-15 14:05:20.324493] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:25.665 [2024-07-15 14:05:20.333992] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.665 [2024-07-15 14:05:20.334363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.665 [2024-07-15 14:05:20.334390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.665 [2024-07-15 14:05:20.334405] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.665 [2024-07-15 14:05:20.334613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.665 [2024-07-15 14:05:20.334852] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.665 [2024-07-15 14:05:20.334879] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.665 [2024-07-15 14:05:20.334894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.665 [2024-07-15 14:05:20.338090] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.665 [2024-07-15 14:05:20.347417] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.665 [2024-07-15 14:05:20.347797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.665 [2024-07-15 14:05:20.347839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.665 [2024-07-15 14:05:20.347854] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.665 [2024-07-15 14:05:20.348076] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.665 [2024-07-15 14:05:20.348286] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.665 [2024-07-15 14:05:20.348306] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.665 [2024-07-15 14:05:20.348320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.665 [2024-07-15 14:05:20.351512] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:25.665 [2024-07-15 14:05:20.360912] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.665 [2024-07-15 14:05:20.361362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.665 [2024-07-15 14:05:20.361404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.665 [2024-07-15 14:05:20.361420] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.665 [2024-07-15 14:05:20.361633] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.665 [2024-07-15 14:05:20.361861] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.665 [2024-07-15 14:05:20.361882] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.666 [2024-07-15 14:05:20.361896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.666 [2024-07-15 14:05:20.365148] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.666 [2024-07-15 14:05:20.374537] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.666 [2024-07-15 14:05:20.375026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.666 [2024-07-15 14:05:20.375055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.666 [2024-07-15 14:05:20.375071] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.666 [2024-07-15 14:05:20.375284] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.666 [2024-07-15 14:05:20.375502] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.666 [2024-07-15 14:05:20.375523] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.666 [2024-07-15 14:05:20.375537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.666 [2024-07-15 14:05:20.378765] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:25.666 [2024-07-15 14:05:20.388091] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.666 [2024-07-15 14:05:20.388582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.666 [2024-07-15 14:05:20.388622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.666 [2024-07-15 14:05:20.388638] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.666 [2024-07-15 14:05:20.388875] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.666 [2024-07-15 14:05:20.389107] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.666 [2024-07-15 14:05:20.389128] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.666 [2024-07-15 14:05:20.389141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.666 [2024-07-15 14:05:20.392342] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.666 [2024-07-15 14:05:20.401584] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.666 [2024-07-15 14:05:20.402022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.666 [2024-07-15 14:05:20.402049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.666 [2024-07-15 14:05:20.402079] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.666 [2024-07-15 14:05:20.402300] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.666 [2024-07-15 14:05:20.402510] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.666 [2024-07-15 14:05:20.402530] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.666 [2024-07-15 14:05:20.402544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.666 [2024-07-15 14:05:20.405705] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:25.666 [2024-07-15 14:05:20.415089] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.666 [2024-07-15 14:05:20.415458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.666 [2024-07-15 14:05:20.415485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.666 [2024-07-15 14:05:20.415502] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.666 [2024-07-15 14:05:20.415708] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.666 [2024-07-15 14:05:20.415951] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.666 [2024-07-15 14:05:20.415972] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.666 [2024-07-15 14:05:20.415986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.666 [2024-07-15 14:05:20.419182] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.666 [2024-07-15 14:05:20.428499] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.666 [2024-07-15 14:05:20.428906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.666 [2024-07-15 14:05:20.428933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.666 [2024-07-15 14:05:20.428949] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.666 [2024-07-15 14:05:20.429194] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.666 [2024-07-15 14:05:20.429405] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.666 [2024-07-15 14:05:20.429426] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.666 [2024-07-15 14:05:20.429439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.666 [2024-07-15 14:05:20.432615] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:25.666 [2024-07-15 14:05:20.442181] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.666 [2024-07-15 14:05:20.442673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.666 [2024-07-15 14:05:20.442715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.666 [2024-07-15 14:05:20.442731] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.666 [2024-07-15 14:05:20.442953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.666 [2024-07-15 14:05:20.443170] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.666 [2024-07-15 14:05:20.443191] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.666 [2024-07-15 14:05:20.443205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.666 [2024-07-15 14:05:20.446454] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.666 [2024-07-15 14:05:20.455963] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.666 [2024-07-15 14:05:20.456436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.666 [2024-07-15 14:05:20.456479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.666 [2024-07-15 14:05:20.456495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.666 [2024-07-15 14:05:20.456708] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.666 [2024-07-15 14:05:20.456934] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.666 [2024-07-15 14:05:20.456955] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.666 [2024-07-15 14:05:20.456970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.666 [2024-07-15 14:05:20.460145] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:25.666 [2024-07-15 14:05:20.469445] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.666 [2024-07-15 14:05:20.469908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.666 [2024-07-15 14:05:20.469935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.666 [2024-07-15 14:05:20.469967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.666 [2024-07-15 14:05:20.470192] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.666 [2024-07-15 14:05:20.470403] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.666 [2024-07-15 14:05:20.470423] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.666 [2024-07-15 14:05:20.470441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.666 [2024-07-15 14:05:20.473605] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.666 [2024-07-15 14:05:20.482944] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.666 [2024-07-15 14:05:20.483411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.666 [2024-07-15 14:05:20.483452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.666 [2024-07-15 14:05:20.483468] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.666 [2024-07-15 14:05:20.483674] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.666 [2024-07-15 14:05:20.483915] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.666 [2024-07-15 14:05:20.483937] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.666 [2024-07-15 14:05:20.483951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.666 [2024-07-15 14:05:20.487156] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:25.666 [2024-07-15 14:05:20.496474] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.666 [2024-07-15 14:05:20.496899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.666 [2024-07-15 14:05:20.496926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.666 [2024-07-15 14:05:20.496957] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.666 [2024-07-15 14:05:20.497184] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.666 [2024-07-15 14:05:20.497394] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.666 [2024-07-15 14:05:20.497414] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.666 [2024-07-15 14:05:20.497427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.666 [2024-07-15 14:05:20.500605] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.924 [2024-07-15 14:05:20.510173] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.924 [2024-07-15 14:05:20.510617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.924 [2024-07-15 14:05:20.510643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.924 [2024-07-15 14:05:20.510674] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.925 [2024-07-15 14:05:20.510912] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.925 [2024-07-15 14:05:20.511144] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.925 [2024-07-15 14:05:20.511165] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.925 [2024-07-15 14:05:20.511179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.925 [2024-07-15 14:05:20.514332] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:25.925 [2024-07-15 14:05:20.523660] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.925 [2024-07-15 14:05:20.524040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.925 [2024-07-15 14:05:20.524093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.925 [2024-07-15 14:05:20.524110] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.925 [2024-07-15 14:05:20.524316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.925 [2024-07-15 14:05:20.524526] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.925 [2024-07-15 14:05:20.524546] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.925 [2024-07-15 14:05:20.524560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.925 [2024-07-15 14:05:20.527759] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.925 [2024-07-15 14:05:20.537101] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.925 [2024-07-15 14:05:20.537490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.925 [2024-07-15 14:05:20.537531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.925 [2024-07-15 14:05:20.537546] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.925 [2024-07-15 14:05:20.537793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.925 [2024-07-15 14:05:20.538010] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.925 [2024-07-15 14:05:20.538031] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.925 [2024-07-15 14:05:20.538045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.925 [2024-07-15 14:05:20.541242] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:25.925 [2024-07-15 14:05:20.550551] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.925 [2024-07-15 14:05:20.550973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.925 [2024-07-15 14:05:20.551001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.925 [2024-07-15 14:05:20.551017] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.925 [2024-07-15 14:05:20.551231] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.925 [2024-07-15 14:05:20.551448] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.925 [2024-07-15 14:05:20.551469] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.925 [2024-07-15 14:05:20.551483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.925 [2024-07-15 14:05:20.554852] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.925 [2024-07-15 14:05:20.563967] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.925 [2024-07-15 14:05:20.564366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.925 [2024-07-15 14:05:20.564407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.925 [2024-07-15 14:05:20.564422] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.925 [2024-07-15 14:05:20.564643] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.925 [2024-07-15 14:05:20.564890] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.925 [2024-07-15 14:05:20.564913] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.925 [2024-07-15 14:05:20.564927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.925 [2024-07-15 14:05:20.568120] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:25.925 [2024-07-15 14:05:20.577409] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.925 [2024-07-15 14:05:20.577797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.925 [2024-07-15 14:05:20.577838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.925 [2024-07-15 14:05:20.577853] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.925 [2024-07-15 14:05:20.578095] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.925 [2024-07-15 14:05:20.578305] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.925 [2024-07-15 14:05:20.578326] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.925 [2024-07-15 14:05:20.578339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.925 [2024-07-15 14:05:20.581517] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.925 [2024-07-15 14:05:20.590870] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.925 [2024-07-15 14:05:20.591231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.925 [2024-07-15 14:05:20.591259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.925 [2024-07-15 14:05:20.591274] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.925 [2024-07-15 14:05:20.591481] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.925 [2024-07-15 14:05:20.591692] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.925 [2024-07-15 14:05:20.591712] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.925 [2024-07-15 14:05:20.591725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.925 [2024-07-15 14:05:20.594939] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:25.925 [2024-07-15 14:05:20.604282] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.925 [2024-07-15 14:05:20.604650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.925 [2024-07-15 14:05:20.604691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.925 [2024-07-15 14:05:20.604706] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.925 [2024-07-15 14:05:20.604943] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.925 [2024-07-15 14:05:20.605173] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.925 [2024-07-15 14:05:20.605194] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.925 [2024-07-15 14:05:20.605208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.925 [2024-07-15 14:05:20.608452] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.925 [2024-07-15 14:05:20.617908] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.925 [2024-07-15 14:05:20.618309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.925 [2024-07-15 14:05:20.618350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.925 [2024-07-15 14:05:20.618365] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.925 [2024-07-15 14:05:20.618605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.925 [2024-07-15 14:05:20.618835] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.925 [2024-07-15 14:05:20.618857] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.925 [2024-07-15 14:05:20.618870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.925 [2024-07-15 14:05:20.622219] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:25.925 [2024-07-15 14:05:20.631424] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.925 [2024-07-15 14:05:20.631789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.925 [2024-07-15 14:05:20.631832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.925 [2024-07-15 14:05:20.631848] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.925 [2024-07-15 14:05:20.632090] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.925 [2024-07-15 14:05:20.632301] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.925 [2024-07-15 14:05:20.632322] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.925 [2024-07-15 14:05:20.632335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.925 [2024-07-15 14:05:20.635485] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.925 [2024-07-15 14:05:20.644852] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.925 [2024-07-15 14:05:20.645231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.925 [2024-07-15 14:05:20.645272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.925 [2024-07-15 14:05:20.645287] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.925 [2024-07-15 14:05:20.645508] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.925 [2024-07-15 14:05:20.645719] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.925 [2024-07-15 14:05:20.645762] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.925 [2024-07-15 14:05:20.645780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.925 [2024-07-15 14:05:20.648986] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:25.925 [2024-07-15 14:05:20.658316] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.925 [2024-07-15 14:05:20.658684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.925 [2024-07-15 14:05:20.658726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.925 [2024-07-15 14:05:20.658754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.925 [2024-07-15 14:05:20.658982] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.925 [2024-07-15 14:05:20.659213] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.925 [2024-07-15 14:05:20.659233] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.925 [2024-07-15 14:05:20.659246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.925 [2024-07-15 14:05:20.662396] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.925 [2024-07-15 14:05:20.671788] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.925 [2024-07-15 14:05:20.672168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.925 [2024-07-15 14:05:20.672194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.925 [2024-07-15 14:05:20.672209] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.925 [2024-07-15 14:05:20.672430] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.925 [2024-07-15 14:05:20.672641] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.925 [2024-07-15 14:05:20.672661] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.925 [2024-07-15 14:05:20.672674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.925 [2024-07-15 14:05:20.675899] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:25.925 [2024-07-15 14:05:20.685241] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.925 [2024-07-15 14:05:20.685682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.925 [2024-07-15 14:05:20.685723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.925 [2024-07-15 14:05:20.685746] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.925 [2024-07-15 14:05:20.685976] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.925 [2024-07-15 14:05:20.686207] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.925 [2024-07-15 14:05:20.686228] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.925 [2024-07-15 14:05:20.686241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.925 [2024-07-15 14:05:20.689433] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.925 [2024-07-15 14:05:20.698787] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.925 [2024-07-15 14:05:20.699181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.925 [2024-07-15 14:05:20.699207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.925 [2024-07-15 14:05:20.699238] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.925 [2024-07-15 14:05:20.699445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.925 [2024-07-15 14:05:20.699655] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.925 [2024-07-15 14:05:20.699681] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.925 [2024-07-15 14:05:20.699695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.925 [2024-07-15 14:05:20.702882] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:25.925 [2024-07-15 14:05:20.712351] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.925 [2024-07-15 14:05:20.712703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.925 [2024-07-15 14:05:20.712752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.926 [2024-07-15 14:05:20.712770] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.926 [2024-07-15 14:05:20.712983] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.926 [2024-07-15 14:05:20.713210] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.926 [2024-07-15 14:05:20.713231] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.926 [2024-07-15 14:05:20.713245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.926 [2024-07-15 14:05:20.716469] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.926 [2024-07-15 14:05:20.725855] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.926 [2024-07-15 14:05:20.726256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.926 [2024-07-15 14:05:20.726284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.926 [2024-07-15 14:05:20.726299] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.926 [2024-07-15 14:05:20.726514] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.926 [2024-07-15 14:05:20.726751] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.926 [2024-07-15 14:05:20.726772] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.926 [2024-07-15 14:05:20.726785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.926 [2024-07-15 14:05:20.730007] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:25.926 [2024-07-15 14:05:20.739473] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.926 [2024-07-15 14:05:20.739837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.926 [2024-07-15 14:05:20.739865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.926 [2024-07-15 14:05:20.739881] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.926 [2024-07-15 14:05:20.740088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.926 [2024-07-15 14:05:20.740299] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.926 [2024-07-15 14:05:20.740320] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.926 [2024-07-15 14:05:20.740333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.926 [2024-07-15 14:05:20.743587] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:25.926 [2024-07-15 14:05:20.753117] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.926 [2024-07-15 14:05:20.753494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.926 [2024-07-15 14:05:20.753522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:25.926 [2024-07-15 14:05:20.753538] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:25.926 [2024-07-15 14:05:20.753789] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:25.926 [2024-07-15 14:05:20.754007] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:25.926 [2024-07-15 14:05:20.754028] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:25.926 [2024-07-15 14:05:20.754056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:25.926 [2024-07-15 14:05:20.757379] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:26.183 [2024-07-15 14:05:20.766908] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.183 [2024-07-15 14:05:20.767287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.183 [2024-07-15 14:05:20.767315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:26.183 [2024-07-15 14:05:20.767331] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:26.183 [2024-07-15 14:05:20.767545] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:26.183 [2024-07-15 14:05:20.767772] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.183 [2024-07-15 14:05:20.767793] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.183 [2024-07-15 14:05:20.767807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.183 [2024-07-15 14:05:20.771111] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.183 [2024-07-15 14:05:20.780545] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.183 [2024-07-15 14:05:20.781039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.183 [2024-07-15 14:05:20.781066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:26.183 [2024-07-15 14:05:20.781096] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:26.183 [2024-07-15 14:05:20.781310] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:26.183 [2024-07-15 14:05:20.781527] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.183 [2024-07-15 14:05:20.781548] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.183 [2024-07-15 14:05:20.781561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.183 [2024-07-15 14:05:20.784890] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:26.183 14:05:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:26.183 14:05:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:26:26.183 14:05:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:26.183 14:05:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:26.183 14:05:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:26.183 [2024-07-15 14:05:20.794091] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.183 [2024-07-15 14:05:20.794480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.183 [2024-07-15 14:05:20.794507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:26.183 [2024-07-15 14:05:20.794523] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:26.183 [2024-07-15 14:05:20.794765] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:26.183 [2024-07-15 14:05:20.794983] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.183 [2024-07-15 14:05:20.795006] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.183 [2024-07-15 14:05:20.795020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.183 [2024-07-15 14:05:20.798406] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.183 [2024-07-15 14:05:20.807671] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.183 [2024-07-15 14:05:20.808049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.183 [2024-07-15 14:05:20.808078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:26.183 [2024-07-15 14:05:20.808094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:26.183 [2024-07-15 14:05:20.808333] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:26.183 [2024-07-15 14:05:20.808550] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.183 [2024-07-15 14:05:20.808572] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.183 [2024-07-15 14:05:20.808586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.183 14:05:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:26.183 14:05:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:26.183 14:05:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.183 14:05:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:26.183 [2024-07-15 14:05:20.811913] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:26.183 [2024-07-15 14:05:20.815530] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:26.183 [2024-07-15 14:05:20.821339] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.183 14:05:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.183 14:05:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:26.183 [2024-07-15 14:05:20.821765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.183 [2024-07-15 14:05:20.821803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:26.183 [2024-07-15 14:05:20.821820] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:26.183 14:05:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.184 14:05:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:26.184 [2024-07-15 14:05:20.822040] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:26.184 [2024-07-15 14:05:20.822267] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.184 [2024-07-15 14:05:20.822295] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.184 [2024-07-15 14:05:20.822325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:26:26.184 [2024-07-15 14:05:20.825549] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:26.184 [2024-07-15 14:05:20.835079] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.184 [2024-07-15 14:05:20.835568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.184 [2024-07-15 14:05:20.835608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:26.184 [2024-07-15 14:05:20.835623] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:26.184 [2024-07-15 14:05:20.835855] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:26.184 [2024-07-15 14:05:20.836101] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.184 [2024-07-15 14:05:20.836121] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.184 [2024-07-15 14:05:20.836134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.184 [2024-07-15 14:05:20.839394] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:26.184 [2024-07-15 14:05:20.848638] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.184 [2024-07-15 14:05:20.849256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.184 [2024-07-15 14:05:20.849295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:26.184 [2024-07-15 14:05:20.849329] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:26.184 [2024-07-15 14:05:20.849548] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:26.184 [2024-07-15 14:05:20.849800] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.184 [2024-07-15 14:05:20.849822] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.184 [2024-07-15 14:05:20.849839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.184 [2024-07-15 14:05:20.853129] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.184 Malloc0 00:26:26.184 14:05:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.184 14:05:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:26.184 14:05:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.184 14:05:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:26.184 [2024-07-15 14:05:20.862474] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.184 [2024-07-15 14:05:20.862873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.184 [2024-07-15 14:05:20.862903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:26.184 [2024-07-15 14:05:20.862920] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:26.184 [2024-07-15 14:05:20.863137] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:26.184 [2024-07-15 14:05:20.863367] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.184 [2024-07-15 14:05:20.863387] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.184 [2024-07-15 14:05:20.863409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.184 14:05:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.184 14:05:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:26.184 [2024-07-15 14:05:20.866704] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:26.184 14:05:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.184 14:05:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:26.184 14:05:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.184 14:05:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:26.184 14:05:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.184 14:05:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:26.184 [2024-07-15 14:05:20.876218] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.184 [2024-07-15 14:05:20.876675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.184 [2024-07-15 14:05:20.876717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174b540 with addr=10.0.0.2, port=4420 00:26:26.184 [2024-07-15 14:05:20.876734] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174b540 is same with the state(5) to be set 00:26:26.184 [2024-07-15 14:05:20.876958] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174b540 (9): Bad file descriptor 00:26:26.184 [2024-07-15 14:05:20.877188] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.184 [2024-07-15 14:05:20.877209] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.184 [2024-07-15 14:05:20.877223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.184 [2024-07-15 14:05:20.878450] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:26.184 [2024-07-15 14:05:20.880452] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:26.184 14:05:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.184 14:05:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3857832 00:26:26.184 [2024-07-15 14:05:20.889907] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.184 [2024-07-15 14:05:20.966707] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
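(Aside: interleaved with the reconnect errors, the xtrace lines above show the target being brought up step by step (nvmf_create_transport, bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener), and the very next reset succeeds once the listener on 10.0.0.2 port 4420 is up. Since rpc_cmd in these scripts is essentially a wrapper around SPDK's JSON-RPC interface, the same bring-up can be sketched as plain scripts/rpc.py calls; the values are copied from the trace, and this is only an illustration, not an excerpt of bdevperf.sh:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420)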
00:26:36.147 00:26:36.147 Latency(us) 00:26:36.147 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:36.147 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:36.147 Verification LBA range: start 0x0 length 0x4000 00:26:36.147 Nvme1n1 : 15.01 6409.42 25.04 12153.66 0.00 6873.47 861.68 19126.80 00:26:36.147 =================================================================================================================== 00:26:36.147 Total : 6409.42 25.04 12153.66 0.00 6873.47 861.68 19126.80 00:26:36.147 14:05:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:26:36.147 14:05:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:36.147 14:05:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.147 14:05:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:36.147 14:05:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.147 14:05:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:26:36.147 14:05:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:26:36.147 14:05:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:36.147 14:05:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:26:36.147 14:05:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:36.147 14:05:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:26:36.147 14:05:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:36.147 14:05:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:36.147 rmmod nvme_tcp 00:26:36.147 rmmod nvme_fabrics 00:26:36.147 rmmod nvme_keyring 00:26:36.147 14:05:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:36.147 14:05:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:26:36.147 14:05:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:26:36.147 14:05:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 3858616 ']' 00:26:36.147 14:05:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 3858616 00:26:36.147 14:05:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 3858616 ']' 00:26:36.147 14:05:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 3858616 00:26:36.148 14:05:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:26:36.148 14:05:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:36.148 14:05:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3858616 00:26:36.148 14:05:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:36.148 14:05:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:36.148 14:05:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3858616' 00:26:36.148 killing process with pid 3858616 00:26:36.148 14:05:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 3858616 00:26:36.148 14:05:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 3858616 00:26:36.148 14:05:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:36.148 14:05:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
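(Aside: the bdevperf summary above is the usual results table flattened into the log; the columns after the device name are runtime in seconds, IOPS, MiB/s, Fail/s, TO/s and then average/min/max latency in microseconds, for the verify workload at queue depth 128 with 4096-byte I/O. The throughput column follows directly from IOPS and I/O size; a throwaway sanity check, not part of the test:

    python3 -c 'print(round(6409.42 * 4096 / 1048576, 2))'
    # 25.04  (MiB/s, matching the reported column)

The large Fail/s value lines up with the repeated controller resets earlier in the run, where queued I/O is failed back while the controller is down.)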
00:26:36.148 14:05:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:36.148 14:05:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:36.148 14:05:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:36.148 14:05:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:36.148 14:05:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:36.148 14:05:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:37.526 14:05:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:37.526 00:26:37.526 real 0m23.275s 00:26:37.526 user 1m2.761s 00:26:37.526 sys 0m4.486s 00:26:37.526 14:05:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:37.526 14:05:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:37.526 ************************************ 00:26:37.526 END TEST nvmf_bdevperf 00:26:37.526 ************************************ 00:26:37.526 14:05:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:37.526 14:05:32 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:37.526 14:05:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:37.526 14:05:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:37.526 14:05:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:37.526 ************************************ 00:26:37.526 START TEST nvmf_target_disconnect 00:26:37.526 ************************************ 00:26:37.526 14:05:32 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:37.526 * Looking for test storage... 
00:26:37.526 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:37.526 14:05:32 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:37.526 14:05:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:26:37.526 14:05:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:37.526 14:05:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:37.526 14:05:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:37.526 14:05:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:37.526 14:05:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:37.526 14:05:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:37.526 14:05:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:37.526 14:05:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:37.526 14:05:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:37.526 14:05:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:37.526 14:05:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:37.526 14:05:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:26:37.526 14:05:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:37.526 14:05:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:37.526 14:05:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:37.526 14:05:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:37.526 14:05:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:37.526 14:05:32 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:37.526 14:05:32 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:37.527 14:05:32 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:37.527 14:05:32 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.527 14:05:32 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.527 14:05:32 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.527 14:05:32 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:26:37.527 14:05:32 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.527 14:05:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:26:37.527 14:05:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:37.527 14:05:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:37.527 14:05:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:37.527 14:05:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:37.527 14:05:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:37.527 14:05:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:37.527 14:05:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:37.527 14:05:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:37.527 14:05:32 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:37.527 14:05:32 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:26:37.527 14:05:32 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:26:37.527 14:05:32 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:26:37.527 14:05:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:37.527 14:05:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:37.527 14:05:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:26:37.527 14:05:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:37.527 14:05:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:37.527 14:05:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:37.527 14:05:32 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:37.527 14:05:32 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:37.527 14:05:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:37.527 14:05:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:37.527 14:05:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:26:37.527 14:05:32 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:26:39.433 Found 0000:84:00.0 (0x8086 - 0x159b) 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:26:39.433 Found 0000:84:00.1 (0x8086 - 0x159b) 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:39.433 14:05:34 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:26:39.433 Found net devices under 0000:84:00.0: cvl_0_0 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:26:39.433 Found net devices under 0000:84:00.1: cvl_0_1 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:39.433 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:39.693 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:39.693 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:26:39.693 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:39.693 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:39.693 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:39.693 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:39.693 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:39.693 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:39.693 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:26:39.693 00:26:39.693 --- 10.0.0.2 ping statistics --- 00:26:39.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:39.693 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:26:39.693 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:39.693 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:39.693 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:26:39.693 00:26:39.693 --- 10.0.0.1 ping statistics --- 00:26:39.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:39.693 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:26:39.693 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:39.693 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:26:39.693 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:39.693 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:39.693 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:39.693 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:39.693 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:39.693 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:39.693 14:05:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:39.693 14:05:34 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:26:39.693 14:05:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:39.693 14:05:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:39.693 14:05:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:39.693 ************************************ 00:26:39.693 START TEST nvmf_target_disconnect_tc1 00:26:39.693 ************************************ 00:26:39.693 14:05:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:26:39.693 14:05:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:39.693 14:05:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:26:39.693 
14:05:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:39.693 14:05:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:39.693 14:05:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:39.693 14:05:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:39.693 14:05:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:39.693 14:05:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:39.693 14:05:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:39.693 14:05:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:39.693 14:05:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:26:39.693 14:05:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:39.693 EAL: No free 2048 kB hugepages reported on node 1 00:26:39.693 [2024-07-15 14:05:34.518953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.693 [2024-07-15 14:05:34.519034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fa5790 with addr=10.0.0.2, port=4420 00:26:39.693 [2024-07-15 14:05:34.519064] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:39.693 [2024-07-15 14:05:34.519099] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:39.693 [2024-07-15 14:05:34.519111] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:26:39.693 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:26:39.693 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:26:39.693 Initializing NVMe Controllers 00:26:39.693 14:05:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:26:39.693 14:05:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:39.693 14:05:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:39.693 14:05:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:39.693 00:26:39.693 real 0m0.090s 00:26:39.693 user 0m0.036s 00:26:39.693 sys 
0m0.053s 00:26:39.693 14:05:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:39.693 14:05:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:39.693 ************************************ 00:26:39.693 END TEST nvmf_target_disconnect_tc1 00:26:39.693 ************************************ 00:26:39.952 14:05:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:26:39.952 14:05:34 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:26:39.952 14:05:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:39.952 14:05:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:39.952 14:05:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:39.952 ************************************ 00:26:39.952 START TEST nvmf_target_disconnect_tc2 00:26:39.952 ************************************ 00:26:39.952 14:05:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:26:39.952 14:05:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:26:39.952 14:05:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:26:39.952 14:05:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:39.952 14:05:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:39.952 14:05:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:39.952 14:05:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3861791 00:26:39.952 14:05:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:26:39.952 14:05:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3861791 00:26:39.952 14:05:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3861791 ']' 00:26:39.952 14:05:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:39.952 14:05:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:39.952 14:05:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:39.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
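(Aside: the tc2 target is started with -m 0xF0, which is a CPU core mask rather than a core count. 0xF0 is binary 11110000, so the four reactor threads should land on cores 4 through 7, and that is exactly what the app reports a few lines below. Illustrative decode of the mask, not part of the test scripts:

    python3 -c 'print([i for i in range(8) if (0xF0 >> i) & 1])'
    # [4, 5, 6, 7])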
00:26:39.952 14:05:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:39.952 14:05:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:39.952 [2024-07-15 14:05:34.635399] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:26:39.952 [2024-07-15 14:05:34.635471] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:39.952 EAL: No free 2048 kB hugepages reported on node 1 00:26:39.952 [2024-07-15 14:05:34.698803] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:40.210 [2024-07-15 14:05:34.808686] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:40.210 [2024-07-15 14:05:34.808764] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:40.210 [2024-07-15 14:05:34.808779] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:40.210 [2024-07-15 14:05:34.808806] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:40.210 [2024-07-15 14:05:34.808816] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:40.210 [2024-07-15 14:05:34.809138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:26:40.210 [2024-07-15 14:05:34.809198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:26:40.210 [2024-07-15 14:05:34.809270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:26:40.210 [2024-07-15 14:05:34.809266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:26:40.211 14:05:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:40.211 14:05:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:26:40.211 14:05:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:40.211 14:05:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:40.211 14:05:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:40.211 14:05:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:40.211 14:05:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:40.211 14:05:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.211 14:05:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:40.211 Malloc0 00:26:40.211 14:05:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.211 14:05:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:40.211 14:05:34 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.211 14:05:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:40.211 [2024-07-15 14:05:34.976518] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:40.211 14:05:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.211 14:05:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:40.211 14:05:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.211 14:05:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:40.211 14:05:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.211 14:05:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:40.211 14:05:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.211 14:05:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:40.211 14:05:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.211 14:05:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:40.211 14:05:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.211 14:05:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:40.211 [2024-07-15 14:05:35.004809] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:40.211 14:05:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.211 14:05:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:40.211 14:05:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.211 14:05:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:40.211 14:05:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.211 14:05:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3861819 00:26:40.211 14:05:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:40.211 14:05:35 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:26:40.468 EAL: No free 2048 kB 
hugepages reported on node 1 00:26:42.388 14:05:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3861791 00:26:42.388 14:05:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:26:42.388 Read completed with error (sct=0, sc=8) 00:26:42.388 starting I/O failed 00:26:42.388 Read completed with error (sct=0, sc=8) 00:26:42.388 starting I/O failed 00:26:42.388 Read completed with error (sct=0, sc=8) 00:26:42.388 starting I/O failed 00:26:42.388 Write completed with error (sct=0, sc=8) 00:26:42.388 starting I/O failed 00:26:42.388 Read completed with error (sct=0, sc=8) 00:26:42.388 starting I/O failed 00:26:42.388 Write completed with error (sct=0, sc=8) 00:26:42.388 starting I/O failed 00:26:42.388 Write completed with error (sct=0, sc=8) 00:26:42.388 starting I/O failed 00:26:42.388 Write completed with error (sct=0, sc=8) 00:26:42.388 starting I/O failed 00:26:42.388 Write completed with error (sct=0, sc=8) 00:26:42.388 starting I/O failed 00:26:42.388 Read completed with error (sct=0, sc=8) 00:26:42.388 starting I/O failed 00:26:42.388 Write completed with error (sct=0, sc=8) 00:26:42.388 starting I/O failed 00:26:42.388 Read completed with error (sct=0, sc=8) 00:26:42.388 starting I/O failed 00:26:42.388 Read completed with error (sct=0, sc=8) 00:26:42.388 starting I/O failed 00:26:42.388 Write completed with error (sct=0, sc=8) 00:26:42.388 starting I/O failed 00:26:42.388 Write completed with error (sct=0, sc=8) 00:26:42.388 starting I/O failed 00:26:42.388 Write completed with error (sct=0, sc=8) 00:26:42.388 starting I/O failed 00:26:42.388 Read completed with error (sct=0, sc=8) 00:26:42.388 starting I/O failed 00:26:42.388 Read completed with error (sct=0, sc=8) 00:26:42.388 starting I/O failed 00:26:42.388 Read completed with error (sct=0, sc=8) 00:26:42.388 starting I/O failed 00:26:42.389 Read completed with error (sct=0, sc=8) 00:26:42.389 starting I/O failed 00:26:42.389 Read completed with error (sct=0, sc=8) 00:26:42.389 starting I/O failed 00:26:42.389 Write completed with error (sct=0, sc=8) 00:26:42.389 starting I/O failed 00:26:42.389 Write completed with error (sct=0, sc=8) 00:26:42.389 starting I/O failed 00:26:42.389 Write completed with error (sct=0, sc=8) 00:26:42.389 starting I/O failed 00:26:42.389 Write completed with error (sct=0, sc=8) 00:26:42.389 starting I/O failed 00:26:42.389 Write completed with error (sct=0, sc=8) 00:26:42.389 starting I/O failed 00:26:42.389 Read completed with error (sct=0, sc=8) 00:26:42.389 starting I/O failed 00:26:42.389 Read completed with error (sct=0, sc=8) 00:26:42.389 starting I/O failed 00:26:42.389 Write completed with error (sct=0, sc=8) 00:26:42.389 starting I/O failed 00:26:42.389 Read completed with error (sct=0, sc=8) 00:26:42.389 starting I/O failed 00:26:42.389 Read completed with error (sct=0, sc=8) 00:26:42.389 starting I/O failed 00:26:42.389 Read completed with error (sct=0, sc=8) 00:26:42.389 starting I/O failed 00:26:42.389 Read completed with error (sct=0, sc=8) 00:26:42.389 starting I/O failed 00:26:42.389 Read completed with error (sct=0, sc=8) 00:26:42.389 starting I/O failed 00:26:42.389 [2024-07-15 14:05:37.029708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:42.389 Read completed with error (sct=0, sc=8) 00:26:42.389 starting I/O failed 00:26:42.389 Read completed with error (sct=0, sc=8) 00:26:42.389 
starting I/O failed 00:26:42.389 Read completed with error (sct=0, sc=8) 00:26:42.389 starting I/O failed 00:26:42.389 Read completed with error (sct=0, sc=8) 00:26:42.389 starting I/O failed 00:26:42.389 Read completed with error (sct=0, sc=8) 00:26:42.389 starting I/O failed 00:26:42.389 Read completed with error (sct=0, sc=8) 00:26:42.389 starting I/O failed 00:26:42.389 Write completed with error (sct=0, sc=8) 00:26:42.389 starting I/O failed 00:26:42.389 Read completed with error (sct=0, sc=8) 00:26:42.389 starting I/O failed 00:26:42.389 Read completed with error (sct=0, sc=8) 00:26:42.389 starting I/O failed 00:26:42.389 Read completed with error (sct=0, sc=8) 00:26:42.389 starting I/O failed 00:26:42.389 Read completed with error (sct=0, sc=8) 00:26:42.389 starting I/O failed 00:26:42.389 Read completed with error (sct=0, sc=8) 00:26:42.389 starting I/O failed 00:26:42.389 Write completed with error (sct=0, sc=8) 00:26:42.389 starting I/O failed 00:26:42.389 Read completed with error (sct=0, sc=8) 00:26:42.389 starting I/O failed 00:26:42.389 Read completed with error (sct=0, sc=8) 00:26:42.389 starting I/O failed 00:26:42.389 Read completed with error (sct=0, sc=8) 00:26:42.389 starting I/O failed 00:26:42.389 Read completed with error (sct=0, sc=8) 00:26:42.389 starting I/O failed 00:26:42.389 Read completed with error (sct=0, sc=8) 00:26:42.389 starting I/O failed 00:26:42.389 Read completed with error (sct=0, sc=8) 00:26:42.389 starting I/O failed 00:26:42.389 Write completed with error (sct=0, sc=8) 00:26:42.389 starting I/O failed 00:26:42.389 Write completed with error (sct=0, sc=8) 00:26:42.389 starting I/O failed 00:26:42.389 Read completed with error (sct=0, sc=8) 00:26:42.389 starting I/O failed 00:26:42.389 Write completed with error (sct=0, sc=8) 00:26:42.389 starting I/O failed 00:26:42.389 Read completed with error (sct=0, sc=8) 00:26:42.389 starting I/O failed 00:26:42.389 Read completed with error (sct=0, sc=8) 00:26:42.389 starting I/O failed 00:26:42.389 Read completed with error (sct=0, sc=8) 00:26:42.389 starting I/O failed 00:26:42.389 Read completed with error (sct=0, sc=8) 00:26:42.389 starting I/O failed 00:26:42.389 Write completed with error (sct=0, sc=8) 00:26:42.389 starting I/O failed 00:26:42.389 Read completed with error (sct=0, sc=8) 00:26:42.389 starting I/O failed 00:26:42.389 Write completed with error (sct=0, sc=8) 00:26:42.389 starting I/O failed 00:26:42.389 [2024-07-15 14:05:37.030045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:42.389 [2024-07-15 14:05:37.030282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.389 [2024-07-15 14:05:37.030308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.389 qpair failed and we were unable to recover it. 00:26:42.389 [2024-07-15 14:05:37.030422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.389 [2024-07-15 14:05:37.030445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.389 qpair failed and we were unable to recover it. 
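(Aside: the burst of "completed with error (sct=0, sc=8)" entries above is the reconnect example's outstanding I/O being failed back after the target process was killed with kill -9. Status code type 0 is the generic command status set, and status code 0x08 in that set is "Command Aborted due to SQ Deletion", which is how the SPDK initiator typically reports requests it aborts when a queue pair goes away. The matching constant can be located in the SPDK headers; an illustrative grep, assuming the spdk checkout used by this job (adjust the path to wherever spdk lives):

    grep -n ABORTED_SQ_DELETION spdk/include/spdk/nvme_spec.h

The "qpair failed and we were unable to recover it" lines that follow are then the expected outcome while nothing is listening on 10.0.0.2 port 4420.)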
[... the same sequence repeats for each subsequent connection attempt, with timestamps advancing through [2024-07-15 14:05:37.068605]: posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420, each ending with "qpair failed and we were unable to recover it." ...]
00:26:42.394 [2024-07-15 14:05:37.068769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.394 [2024-07-15 14:05:37.068794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.394 qpair failed and we were unable to recover it. 00:26:42.394 [2024-07-15 14:05:37.068947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.394 [2024-07-15 14:05:37.068990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.394 qpair failed and we were unable to recover it. 00:26:42.394 [2024-07-15 14:05:37.069150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.394 [2024-07-15 14:05:37.069199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.394 qpair failed and we were unable to recover it. 00:26:42.394 [2024-07-15 14:05:37.069352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.394 [2024-07-15 14:05:37.069398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.394 qpair failed and we were unable to recover it. 00:26:42.394 [2024-07-15 14:05:37.069614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.394 [2024-07-15 14:05:37.069637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.394 qpair failed and we were unable to recover it. 00:26:42.394 [2024-07-15 14:05:37.069840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.394 [2024-07-15 14:05:37.069885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.394 qpair failed and we were unable to recover it. 00:26:42.394 [2024-07-15 14:05:37.069986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.394 [2024-07-15 14:05:37.070020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.394 qpair failed and we were unable to recover it. 00:26:42.394 [2024-07-15 14:05:37.070215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.394 [2024-07-15 14:05:37.070262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.394 qpair failed and we were unable to recover it. 00:26:42.394 [2024-07-15 14:05:37.070412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.394 [2024-07-15 14:05:37.070436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.394 qpair failed and we were unable to recover it. 00:26:42.394 [2024-07-15 14:05:37.070669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.394 [2024-07-15 14:05:37.070691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.394 qpair failed and we were unable to recover it. 
00:26:42.394 [2024-07-15 14:05:37.070875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.394 [2024-07-15 14:05:37.070924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.394 qpair failed and we were unable to recover it. 00:26:42.394 [2024-07-15 14:05:37.071087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.394 [2024-07-15 14:05:37.071139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.394 qpair failed and we were unable to recover it. 00:26:42.394 [2024-07-15 14:05:37.071292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.394 [2024-07-15 14:05:37.071335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.394 qpair failed and we were unable to recover it. 00:26:42.394 [2024-07-15 14:05:37.071570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.394 [2024-07-15 14:05:37.071601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.394 qpair failed and we were unable to recover it. 00:26:42.394 [2024-07-15 14:05:37.071841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.394 [2024-07-15 14:05:37.071876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.394 qpair failed and we were unable to recover it. 00:26:42.394 [2024-07-15 14:05:37.072057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.394 [2024-07-15 14:05:37.072080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.394 qpair failed and we were unable to recover it. 00:26:42.394 [2024-07-15 14:05:37.072246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.394 [2024-07-15 14:05:37.072302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.394 qpair failed and we were unable to recover it. 00:26:42.394 [2024-07-15 14:05:37.072446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.394 [2024-07-15 14:05:37.072483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.394 qpair failed and we were unable to recover it. 00:26:42.394 [2024-07-15 14:05:37.072618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.394 [2024-07-15 14:05:37.072642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.394 qpair failed and we were unable to recover it. 00:26:42.394 [2024-07-15 14:05:37.072760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.394 [2024-07-15 14:05:37.072785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.394 qpair failed and we were unable to recover it. 
00:26:42.394 [2024-07-15 14:05:37.073035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.394 [2024-07-15 14:05:37.073079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.394 qpair failed and we were unable to recover it. 00:26:42.394 [2024-07-15 14:05:37.073230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.395 [2024-07-15 14:05:37.073276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.395 qpair failed and we were unable to recover it. 00:26:42.395 [2024-07-15 14:05:37.073449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.395 [2024-07-15 14:05:37.073472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.395 qpair failed and we were unable to recover it. 00:26:42.395 [2024-07-15 14:05:37.073619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.395 [2024-07-15 14:05:37.073656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.395 qpair failed and we were unable to recover it. 00:26:42.395 [2024-07-15 14:05:37.073869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.395 [2024-07-15 14:05:37.073917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.395 qpair failed and we were unable to recover it. 00:26:42.395 [2024-07-15 14:05:37.074043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.395 [2024-07-15 14:05:37.074099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.395 qpair failed and we were unable to recover it. 00:26:42.395 [2024-07-15 14:05:37.074272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.395 [2024-07-15 14:05:37.074318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.395 qpair failed and we were unable to recover it. 00:26:42.395 [2024-07-15 14:05:37.074466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.395 [2024-07-15 14:05:37.074488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.395 qpair failed and we were unable to recover it. 00:26:42.395 [2024-07-15 14:05:37.074664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.395 [2024-07-15 14:05:37.074701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.395 qpair failed and we were unable to recover it. 00:26:42.395 [2024-07-15 14:05:37.074866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.395 [2024-07-15 14:05:37.074918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.395 qpair failed and we were unable to recover it. 
00:26:42.395 [2024-07-15 14:05:37.075048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.395 [2024-07-15 14:05:37.075099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.395 qpair failed and we were unable to recover it. 00:26:42.395 [2024-07-15 14:05:37.075307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.395 [2024-07-15 14:05:37.075354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.395 qpair failed and we were unable to recover it. 00:26:42.395 [2024-07-15 14:05:37.075501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.395 [2024-07-15 14:05:37.075534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.395 qpair failed and we were unable to recover it. 00:26:42.395 [2024-07-15 14:05:37.075679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.395 [2024-07-15 14:05:37.075703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.395 qpair failed and we were unable to recover it. 00:26:42.395 [2024-07-15 14:05:37.075833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.395 [2024-07-15 14:05:37.075856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.395 qpair failed and we were unable to recover it. 00:26:42.395 [2024-07-15 14:05:37.076036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.395 [2024-07-15 14:05:37.076059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.395 qpair failed and we were unable to recover it. 00:26:42.395 [2024-07-15 14:05:37.076266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.395 [2024-07-15 14:05:37.076289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.395 qpair failed and we were unable to recover it. 00:26:42.395 [2024-07-15 14:05:37.076495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.395 [2024-07-15 14:05:37.076540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.395 qpair failed and we were unable to recover it. 00:26:42.395 [2024-07-15 14:05:37.076734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.395 [2024-07-15 14:05:37.076761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.395 qpair failed and we were unable to recover it. 00:26:42.395 [2024-07-15 14:05:37.076916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.395 [2024-07-15 14:05:37.076964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.395 qpair failed and we were unable to recover it. 
00:26:42.395 [2024-07-15 14:05:37.077142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.395 [2024-07-15 14:05:37.077195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.395 qpair failed and we were unable to recover it. 00:26:42.395 [2024-07-15 14:05:37.077348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.395 [2024-07-15 14:05:37.077396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.395 qpair failed and we were unable to recover it. 00:26:42.395 [2024-07-15 14:05:37.077560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.395 [2024-07-15 14:05:37.077590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.395 qpair failed and we were unable to recover it. 00:26:42.395 [2024-07-15 14:05:37.077736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.395 [2024-07-15 14:05:37.077779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.395 qpair failed and we were unable to recover it. 00:26:42.395 [2024-07-15 14:05:37.077928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.395 [2024-07-15 14:05:37.077975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.395 qpair failed and we were unable to recover it. 00:26:42.395 [2024-07-15 14:05:37.078127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.395 [2024-07-15 14:05:37.078169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.395 qpair failed and we were unable to recover it. 00:26:42.395 [2024-07-15 14:05:37.078360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.395 [2024-07-15 14:05:37.078407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.395 qpair failed and we were unable to recover it. 00:26:42.395 [2024-07-15 14:05:37.078571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.395 [2024-07-15 14:05:37.078594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.395 qpair failed and we were unable to recover it. 00:26:42.395 [2024-07-15 14:05:37.078710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.395 [2024-07-15 14:05:37.078755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.395 qpair failed and we were unable to recover it. 00:26:42.395 [2024-07-15 14:05:37.078869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.395 [2024-07-15 14:05:37.078894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.395 qpair failed and we were unable to recover it. 
00:26:42.395 [2024-07-15 14:05:37.079126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.395 [2024-07-15 14:05:37.079149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.395 qpair failed and we were unable to recover it. 00:26:42.395 [2024-07-15 14:05:37.079354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.395 [2024-07-15 14:05:37.079409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.395 qpair failed and we were unable to recover it. 00:26:42.395 [2024-07-15 14:05:37.079532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.395 [2024-07-15 14:05:37.079569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.395 qpair failed and we were unable to recover it. 00:26:42.395 [2024-07-15 14:05:37.079817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.395 [2024-07-15 14:05:37.079841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.395 qpair failed and we were unable to recover it. 00:26:42.395 [2024-07-15 14:05:37.080071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.395 [2024-07-15 14:05:37.080120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.395 qpair failed and we were unable to recover it. 00:26:42.395 [2024-07-15 14:05:37.080341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.395 [2024-07-15 14:05:37.080387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.395 qpair failed and we were unable to recover it. 00:26:42.395 [2024-07-15 14:05:37.080516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.395 [2024-07-15 14:05:37.080557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.395 qpair failed and we were unable to recover it. 00:26:42.395 [2024-07-15 14:05:37.080718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.395 [2024-07-15 14:05:37.080761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.395 qpair failed and we were unable to recover it. 00:26:42.395 [2024-07-15 14:05:37.080936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.395 [2024-07-15 14:05:37.080983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.395 qpair failed and we were unable to recover it. 00:26:42.395 [2024-07-15 14:05:37.081183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.395 [2024-07-15 14:05:37.081230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.395 qpair failed and we were unable to recover it. 
00:26:42.395 [2024-07-15 14:05:37.081394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.396 [2024-07-15 14:05:37.081437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.396 qpair failed and we were unable to recover it. 00:26:42.396 [2024-07-15 14:05:37.081595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.396 [2024-07-15 14:05:37.081618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.396 qpair failed and we were unable to recover it. 00:26:42.396 [2024-07-15 14:05:37.081793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.396 [2024-07-15 14:05:37.081817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.396 qpair failed and we were unable to recover it. 00:26:42.396 [2024-07-15 14:05:37.081974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.396 [2024-07-15 14:05:37.081998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.396 qpair failed and we were unable to recover it. 00:26:42.396 [2024-07-15 14:05:37.082222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.396 [2024-07-15 14:05:37.082245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.396 qpair failed and we were unable to recover it. 00:26:42.396 [2024-07-15 14:05:37.082453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.396 [2024-07-15 14:05:37.082486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.396 qpair failed and we were unable to recover it. 00:26:42.396 [2024-07-15 14:05:37.082776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.396 [2024-07-15 14:05:37.082816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.396 qpair failed and we were unable to recover it. 00:26:42.396 [2024-07-15 14:05:37.082985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.396 [2024-07-15 14:05:37.083033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.396 qpair failed and we were unable to recover it. 00:26:42.396 [2024-07-15 14:05:37.083224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.396 [2024-07-15 14:05:37.083272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.396 qpair failed and we were unable to recover it. 00:26:42.396 [2024-07-15 14:05:37.083433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.396 [2024-07-15 14:05:37.083480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.396 qpair failed and we were unable to recover it. 
00:26:42.396 [2024-07-15 14:05:37.083663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.396 [2024-07-15 14:05:37.083690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.396 qpair failed and we were unable to recover it. 00:26:42.396 [2024-07-15 14:05:37.083888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.396 [2024-07-15 14:05:37.083936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.396 qpair failed and we were unable to recover it. 00:26:42.396 [2024-07-15 14:05:37.084138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.396 [2024-07-15 14:05:37.084185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.396 qpair failed and we were unable to recover it. 00:26:42.396 [2024-07-15 14:05:37.084333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.396 [2024-07-15 14:05:37.084356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.396 qpair failed and we were unable to recover it. 00:26:42.396 [2024-07-15 14:05:37.084515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.396 [2024-07-15 14:05:37.084538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.396 qpair failed and we were unable to recover it. 00:26:42.396 [2024-07-15 14:05:37.084807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.396 [2024-07-15 14:05:37.084832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.396 qpair failed and we were unable to recover it. 00:26:42.396 [2024-07-15 14:05:37.085023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.396 [2024-07-15 14:05:37.085073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.396 qpair failed and we were unable to recover it. 00:26:42.396 [2024-07-15 14:05:37.085201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.396 [2024-07-15 14:05:37.085250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.396 qpair failed and we were unable to recover it. 00:26:42.396 [2024-07-15 14:05:37.085489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.396 [2024-07-15 14:05:37.085537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.396 qpair failed and we were unable to recover it. 00:26:42.396 [2024-07-15 14:05:37.085719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.396 [2024-07-15 14:05:37.085752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.396 qpair failed and we were unable to recover it. 
00:26:42.396 [2024-07-15 14:05:37.085893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.396 [2024-07-15 14:05:37.085915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.396 qpair failed and we were unable to recover it. 00:26:42.396 [2024-07-15 14:05:37.086082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.396 [2024-07-15 14:05:37.086129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.396 qpair failed and we were unable to recover it. 00:26:42.396 [2024-07-15 14:05:37.086293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.396 [2024-07-15 14:05:37.086343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.396 qpair failed and we were unable to recover it. 00:26:42.396 [2024-07-15 14:05:37.086530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.396 [2024-07-15 14:05:37.086577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.396 qpair failed and we were unable to recover it. 00:26:42.396 [2024-07-15 14:05:37.086731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.396 [2024-07-15 14:05:37.086788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.396 qpair failed and we were unable to recover it. 00:26:42.396 [2024-07-15 14:05:37.086941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.396 [2024-07-15 14:05:37.086987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.396 qpair failed and we were unable to recover it. 00:26:42.396 [2024-07-15 14:05:37.087168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.396 [2024-07-15 14:05:37.087218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.396 qpair failed and we were unable to recover it. 00:26:42.396 [2024-07-15 14:05:37.087372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.396 [2024-07-15 14:05:37.087405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.396 qpair failed and we were unable to recover it. 00:26:42.396 [2024-07-15 14:05:37.087615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.396 [2024-07-15 14:05:37.087639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.396 qpair failed and we were unable to recover it. 00:26:42.396 [2024-07-15 14:05:37.087847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.396 [2024-07-15 14:05:37.087881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.396 qpair failed and we were unable to recover it. 
00:26:42.396 [2024-07-15 14:05:37.088055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.396 [2024-07-15 14:05:37.088078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.396 qpair failed and we were unable to recover it. 00:26:42.396 [2024-07-15 14:05:37.088278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.396 [2024-07-15 14:05:37.088309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.396 qpair failed and we were unable to recover it. 00:26:42.396 [2024-07-15 14:05:37.088561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.396 [2024-07-15 14:05:37.088603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.396 qpair failed and we were unable to recover it. 00:26:42.396 [2024-07-15 14:05:37.088768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.396 [2024-07-15 14:05:37.088792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.396 qpair failed and we were unable to recover it. 00:26:42.396 [2024-07-15 14:05:37.088943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.396 [2024-07-15 14:05:37.088991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.396 qpair failed and we were unable to recover it. 00:26:42.396 [2024-07-15 14:05:37.089126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.396 [2024-07-15 14:05:37.089181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.396 qpair failed and we were unable to recover it. 00:26:42.396 [2024-07-15 14:05:37.089338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.396 [2024-07-15 14:05:37.089387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.396 qpair failed and we were unable to recover it. 00:26:42.396 [2024-07-15 14:05:37.089506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.396 [2024-07-15 14:05:37.089529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.396 qpair failed and we were unable to recover it. 00:26:42.396 [2024-07-15 14:05:37.089703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.396 [2024-07-15 14:05:37.089754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.396 qpair failed and we were unable to recover it. 00:26:42.397 [2024-07-15 14:05:37.089958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.397 [2024-07-15 14:05:37.089982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.397 qpair failed and we were unable to recover it. 
00:26:42.397 [2024-07-15 14:05:37.090101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.397 [2024-07-15 14:05:37.090124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.397 qpair failed and we were unable to recover it. 00:26:42.397 [2024-07-15 14:05:37.090323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.397 [2024-07-15 14:05:37.090345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.397 qpair failed and we were unable to recover it. 00:26:42.397 [2024-07-15 14:05:37.090501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.397 [2024-07-15 14:05:37.090523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.397 qpair failed and we were unable to recover it. 00:26:42.397 [2024-07-15 14:05:37.090782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.397 [2024-07-15 14:05:37.090808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.397 qpair failed and we were unable to recover it. 00:26:42.397 [2024-07-15 14:05:37.090935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.397 [2024-07-15 14:05:37.090994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.397 qpair failed and we were unable to recover it. 00:26:42.397 [2024-07-15 14:05:37.091312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.397 [2024-07-15 14:05:37.091362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.397 qpair failed and we were unable to recover it. 00:26:42.397 [2024-07-15 14:05:37.091527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.397 [2024-07-15 14:05:37.091576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.397 qpair failed and we were unable to recover it. 00:26:42.397 [2024-07-15 14:05:37.091717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.397 [2024-07-15 14:05:37.091753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.397 qpair failed and we were unable to recover it. 00:26:42.397 [2024-07-15 14:05:37.091952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.397 [2024-07-15 14:05:37.091976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.397 qpair failed and we were unable to recover it. 00:26:42.397 [2024-07-15 14:05:37.092183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.397 [2024-07-15 14:05:37.092236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.397 qpair failed and we were unable to recover it. 
00:26:42.397 [2024-07-15 14:05:37.092383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.397 [2024-07-15 14:05:37.092433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.397 qpair failed and we were unable to recover it. 00:26:42.397 [2024-07-15 14:05:37.092644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.397 [2024-07-15 14:05:37.092667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.397 qpair failed and we were unable to recover it. 00:26:42.397 [2024-07-15 14:05:37.092860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.397 [2024-07-15 14:05:37.092883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.397 qpair failed and we were unable to recover it. 00:26:42.397 [2024-07-15 14:05:37.093077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.397 [2024-07-15 14:05:37.093126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.397 qpair failed and we were unable to recover it. 00:26:42.397 [2024-07-15 14:05:37.093272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.397 [2024-07-15 14:05:37.093304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.397 qpair failed and we were unable to recover it. 00:26:42.397 [2024-07-15 14:05:37.093499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.397 [2024-07-15 14:05:37.093521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.397 qpair failed and we were unable to recover it. 00:26:42.397 [2024-07-15 14:05:37.093667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.397 [2024-07-15 14:05:37.093689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.397 qpair failed and we were unable to recover it. 00:26:42.397 [2024-07-15 14:05:37.093818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.397 [2024-07-15 14:05:37.093842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.397 qpair failed and we were unable to recover it. 00:26:42.397 [2024-07-15 14:05:37.093963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.397 [2024-07-15 14:05:37.093986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.397 qpair failed and we were unable to recover it. 00:26:42.397 [2024-07-15 14:05:37.094241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.397 [2024-07-15 14:05:37.094290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.397 qpair failed and we were unable to recover it. 
00:26:42.397 [2024-07-15 14:05:37.094476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.397 [2024-07-15 14:05:37.094522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.397 qpair failed and we were unable to recover it. 00:26:42.397 [2024-07-15 14:05:37.094662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.397 [2024-07-15 14:05:37.094684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.397 qpair failed and we were unable to recover it. 00:26:42.397 [2024-07-15 14:05:37.094899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.397 [2024-07-15 14:05:37.094923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.397 qpair failed and we were unable to recover it. 00:26:42.397 [2024-07-15 14:05:37.095074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.397 [2024-07-15 14:05:37.095129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.397 qpair failed and we were unable to recover it. 00:26:42.397 [2024-07-15 14:05:37.095306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.397 [2024-07-15 14:05:37.095332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.397 qpair failed and we were unable to recover it. 00:26:42.397 [2024-07-15 14:05:37.095506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.397 [2024-07-15 14:05:37.095528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.397 qpair failed and we were unable to recover it. 00:26:42.397 [2024-07-15 14:05:37.095723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.397 [2024-07-15 14:05:37.095777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.397 qpair failed and we were unable to recover it. 00:26:42.397 [2024-07-15 14:05:37.095942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.397 [2024-07-15 14:05:37.095993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.397 qpair failed and we were unable to recover it. 00:26:42.397 [2024-07-15 14:05:37.096146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.397 [2024-07-15 14:05:37.096187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.397 qpair failed and we were unable to recover it. 00:26:42.397 [2024-07-15 14:05:37.096374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.397 [2024-07-15 14:05:37.096424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.397 qpair failed and we were unable to recover it. 
00:26:42.397 [2024-07-15 14:05:37.096588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.397 [2024-07-15 14:05:37.096610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.397 qpair failed and we were unable to recover it. 00:26:42.397 [2024-07-15 14:05:37.096755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.397 [2024-07-15 14:05:37.096780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.397 qpair failed and we were unable to recover it. 00:26:42.397 [2024-07-15 14:05:37.096936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.397 [2024-07-15 14:05:37.096987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.397 qpair failed and we were unable to recover it. 00:26:42.397 [2024-07-15 14:05:37.097170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.398 [2024-07-15 14:05:37.097220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.398 qpair failed and we were unable to recover it. 00:26:42.398 [2024-07-15 14:05:37.097387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.398 [2024-07-15 14:05:37.097441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.398 qpair failed and we were unable to recover it. 00:26:42.398 [2024-07-15 14:05:37.097602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.398 [2024-07-15 14:05:37.097625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.398 qpair failed and we were unable to recover it. 00:26:42.398 [2024-07-15 14:05:37.097842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.398 [2024-07-15 14:05:37.097891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.398 qpair failed and we were unable to recover it. 00:26:42.398 [2024-07-15 14:05:37.098052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.398 [2024-07-15 14:05:37.098110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.398 qpair failed and we were unable to recover it. 00:26:42.398 [2024-07-15 14:05:37.098251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.398 [2024-07-15 14:05:37.098303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.398 qpair failed and we were unable to recover it. 00:26:42.398 [2024-07-15 14:05:37.098476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.398 [2024-07-15 14:05:37.098498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.398 qpair failed and we were unable to recover it. 
00:26:42.398 [2024-07-15 14:05:37.098628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.398 [2024-07-15 14:05:37.098665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.398 qpair failed and we were unable to recover it. 00:26:42.398 [2024-07-15 14:05:37.098868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.398 [2024-07-15 14:05:37.098902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.398 qpair failed and we were unable to recover it. 00:26:42.398 [2024-07-15 14:05:37.099118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.398 [2024-07-15 14:05:37.099150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.398 qpair failed and we were unable to recover it. 00:26:42.398 [2024-07-15 14:05:37.099305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.398 [2024-07-15 14:05:37.099355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.398 qpair failed and we were unable to recover it. 00:26:42.398 [2024-07-15 14:05:37.099511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.398 [2024-07-15 14:05:37.099533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.398 qpair failed and we were unable to recover it. 00:26:42.398 [2024-07-15 14:05:37.099686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.398 [2024-07-15 14:05:37.099723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.398 qpair failed and we were unable to recover it. 00:26:42.398 [2024-07-15 14:05:37.099912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.398 [2024-07-15 14:05:37.099967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.398 qpair failed and we were unable to recover it. 00:26:42.398 [2024-07-15 14:05:37.100103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.398 [2024-07-15 14:05:37.100151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.398 qpair failed and we were unable to recover it. 
00:26:42.398 Read completed with error (sct=0, sc=8) 
00:26:42.398 starting I/O failed 
00:26:42.398 Read completed with error (sct=0, sc=8) 
00:26:42.398 starting I/O failed 
00:26:42.398 Read completed with error (sct=0, sc=8) 
00:26:42.398 starting I/O failed 
00:26:42.398 Read completed with error (sct=0, sc=8) 
00:26:42.398 starting I/O failed 
00:26:42.398 Read completed with error (sct=0, sc=8) 
00:26:42.398 starting I/O failed 
00:26:42.398 Read completed with error (sct=0, sc=8) 
00:26:42.398 starting I/O failed 
00:26:42.398 Read completed with error (sct=0, sc=8) 
00:26:42.398 starting I/O failed 
00:26:42.398 Read completed with error (sct=0, sc=8) 
00:26:42.398 starting I/O failed 
00:26:42.398 Read completed with error (sct=0, sc=8) 
00:26:42.398 starting I/O failed 
00:26:42.398 Read completed with error (sct=0, sc=8) 
00:26:42.398 starting I/O failed 
00:26:42.398 Read completed with error (sct=0, sc=8) 
00:26:42.398 starting I/O failed 
00:26:42.398 Read completed with error (sct=0, sc=8) 
00:26:42.398 starting I/O failed 
00:26:42.398 Write completed with error (sct=0, sc=8) 
00:26:42.398 starting I/O failed 
00:26:42.398 Write completed with error (sct=0, sc=8) 
00:26:42.398 starting I/O failed 
00:26:42.398 Read completed with error (sct=0, sc=8) 
00:26:42.398 starting I/O failed 
00:26:42.398 Read completed with error (sct=0, sc=8) 
00:26:42.398 starting I/O failed 
00:26:42.398 Read completed with error (sct=0, sc=8) 
00:26:42.398 starting I/O failed 
00:26:42.398 Write completed with error (sct=0, sc=8) 
00:26:42.398 starting I/O failed 
00:26:42.398 Write completed with error (sct=0, sc=8) 
00:26:42.398 starting I/O failed 
00:26:42.398 Write completed with error (sct=0, sc=8) 
00:26:42.398 starting I/O failed 
00:26:42.398 Write completed with error (sct=0, sc=8) 
00:26:42.398 starting I/O failed 
00:26:42.398 Read completed with error (sct=0, sc=8) 
00:26:42.398 starting I/O failed 
00:26:42.398 Write completed with error (sct=0, sc=8) 
00:26:42.398 starting I/O failed 
00:26:42.398 Read completed with error (sct=0, sc=8) 
00:26:42.398 starting I/O failed 
00:26:42.398 Read completed with error (sct=0, sc=8) 
00:26:42.398 starting I/O failed 
00:26:42.398 Write completed with error (sct=0, sc=8) 
00:26:42.398 starting I/O failed 
00:26:42.398 Read completed with error (sct=0, sc=8) 
00:26:42.398 starting I/O failed 
00:26:42.398 Write completed with error (sct=0, sc=8) 
00:26:42.398 starting I/O failed 
00:26:42.398 Read completed with error (sct=0, sc=8) 
00:26:42.398 starting I/O failed 
00:26:42.398 Read completed with error (sct=0, sc=8) 
00:26:42.398 starting I/O failed 
00:26:42.398 Write completed with error (sct=0, sc=8) 
00:26:42.398 starting I/O failed 
00:26:42.398 Write completed with error (sct=0, sc=8) 
00:26:42.398 starting I/O failed 
00:26:42.398 [2024-07-15 14:05:37.100826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 
00:26:42.398 [2024-07-15 14:05:37.101084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:26:42.398 [2024-07-15 14:05:37.101196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 
00:26:42.398 qpair failed and we were unable to recover it. 
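Here the failed qpair's 32 outstanding reads and writes are completed back to the caller with an error status (the sct=0, sc=8 records above), and spdk_nvme_qpair_process_completions then reports a CQ transport error of -6, i.e. -ENXIO ("No such device or address"). The sketch below shows the usual SPDK polling pattern that observes this condition; it assumes the documented behaviour that the call returns the number of completions processed, or a negative errno once the qpair's transport has failed, and the poll_until_failure helper itself is purely illustrative:

/* Illustrative only: how a poller typically sees the failure logged above.
 * Assumes an initialized SPDK environment and an allocated I/O qpair. */
#include "spdk/nvme.h"
#include <stdio.h>

static void poll_until_failure(struct spdk_nvme_qpair *qpair)
{
    for (;;) {
        /* 0 = no limit on the number of completions processed per call. */
        int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);
        if (rc < 0) {
            /* Negative errno: the transport behind the qpair is gone.
             * -6 (-ENXIO) matches "CQ transport error -6" in the log; any
             * commands still in flight were completed with an error status,
             * like the sct=0/sc=8 records above. */
            fprintf(stderr, "qpair failed: %d\n", rc);
            break;
        }
        /* rc >= 0: completions reaped on this pass; keep polling. */
    }
}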
00:26:42.398 [2024-07-15 14:05:37.101473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.398 [2024-07-15 14:05:37.101536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.398 qpair failed and we were unable to recover it. 00:26:42.398 [2024-07-15 14:05:37.101761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.398 [2024-07-15 14:05:37.101809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.398 qpair failed and we were unable to recover it. 00:26:42.398 [2024-07-15 14:05:37.101965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.398 [2024-07-15 14:05:37.101991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.398 qpair failed and we were unable to recover it. 00:26:42.398 [2024-07-15 14:05:37.102173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.398 [2024-07-15 14:05:37.102230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.398 qpair failed and we were unable to recover it. 00:26:42.398 [2024-07-15 14:05:37.102447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.398 [2024-07-15 14:05:37.102506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.398 qpair failed and we were unable to recover it. 00:26:42.398 [2024-07-15 14:05:37.102776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.398 [2024-07-15 14:05:37.102829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.398 qpair failed and we were unable to recover it. 00:26:42.398 [2024-07-15 14:05:37.102965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.398 [2024-07-15 14:05:37.102993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.398 qpair failed and we were unable to recover it. 00:26:42.398 [2024-07-15 14:05:37.103195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.398 [2024-07-15 14:05:37.103253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.398 qpair failed and we were unable to recover it. 00:26:42.398 [2024-07-15 14:05:37.103497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.398 [2024-07-15 14:05:37.103556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.398 qpair failed and we were unable to recover it. 00:26:42.398 [2024-07-15 14:05:37.103854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.398 [2024-07-15 14:05:37.103880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.398 qpair failed and we were unable to recover it. 
00:26:42.398 [2024-07-15 14:05:37.104048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.398 [2024-07-15 14:05:37.104107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.399 qpair failed and we were unable to recover it. 00:26:42.399 [2024-07-15 14:05:37.104420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.399 [2024-07-15 14:05:37.104479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.399 qpair failed and we were unable to recover it. 00:26:42.399 [2024-07-15 14:05:37.104680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.399 [2024-07-15 14:05:37.104703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.399 qpair failed and we were unable to recover it. 00:26:42.399 [2024-07-15 14:05:37.104864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.399 [2024-07-15 14:05:37.104890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.399 qpair failed and we were unable to recover it. 00:26:42.399 [2024-07-15 14:05:37.105110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.399 [2024-07-15 14:05:37.105168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.399 qpair failed and we were unable to recover it. 00:26:42.399 [2024-07-15 14:05:37.105375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.399 [2024-07-15 14:05:37.105433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.399 qpair failed and we were unable to recover it. 00:26:42.399 [2024-07-15 14:05:37.105829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.399 [2024-07-15 14:05:37.105855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.399 qpair failed and we were unable to recover it. 00:26:42.399 [2024-07-15 14:05:37.105986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.399 [2024-07-15 14:05:37.106010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.399 qpair failed and we were unable to recover it. 00:26:42.399 [2024-07-15 14:05:37.106286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.399 [2024-07-15 14:05:37.106310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.399 qpair failed and we were unable to recover it. 00:26:42.399 [2024-07-15 14:05:37.106476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.399 [2024-07-15 14:05:37.106535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.399 qpair failed and we were unable to recover it. 
00:26:42.399 [2024-07-15 14:05:37.106722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.399 [2024-07-15 14:05:37.106805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.399 qpair failed and we were unable to recover it. 00:26:42.399 [2024-07-15 14:05:37.106942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.399 [2024-07-15 14:05:37.106968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.399 qpair failed and we were unable to recover it. 00:26:42.399 [2024-07-15 14:05:37.107195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.399 [2024-07-15 14:05:37.107280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.399 qpair failed and we were unable to recover it. 00:26:42.399 [2024-07-15 14:05:37.107524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.399 [2024-07-15 14:05:37.107581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.399 qpair failed and we were unable to recover it. 00:26:42.399 [2024-07-15 14:05:37.107813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.399 [2024-07-15 14:05:37.107839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.399 qpair failed and we were unable to recover it. 00:26:42.399 [2024-07-15 14:05:37.107976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.399 [2024-07-15 14:05:37.108001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.399 qpair failed and we were unable to recover it. 00:26:42.399 [2024-07-15 14:05:37.108256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.399 [2024-07-15 14:05:37.108314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.399 qpair failed and we were unable to recover it. 00:26:42.399 [2024-07-15 14:05:37.108572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.399 [2024-07-15 14:05:37.108631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.399 qpair failed and we were unable to recover it. 00:26:42.399 [2024-07-15 14:05:37.108828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.399 [2024-07-15 14:05:37.108854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.399 qpair failed and we were unable to recover it. 00:26:42.399 [2024-07-15 14:05:37.108991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.399 [2024-07-15 14:05:37.109015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.399 qpair failed and we were unable to recover it. 
00:26:42.399 [2024-07-15 14:05:37.109319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.399 [2024-07-15 14:05:37.109342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.399 qpair failed and we were unable to recover it. 00:26:42.399 [2024-07-15 14:05:37.109501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.399 [2024-07-15 14:05:37.109569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.399 qpair failed and we were unable to recover it. 00:26:42.399 [2024-07-15 14:05:37.109838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.399 [2024-07-15 14:05:37.109863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.399 qpair failed and we were unable to recover it. 00:26:42.399 [2024-07-15 14:05:37.110050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.399 [2024-07-15 14:05:37.110073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.399 qpair failed and we were unable to recover it. 00:26:42.399 [2024-07-15 14:05:37.110349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.399 [2024-07-15 14:05:37.110407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.399 qpair failed and we were unable to recover it. 00:26:42.399 [2024-07-15 14:05:37.110659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.399 [2024-07-15 14:05:37.110717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.399 qpair failed and we were unable to recover it. 00:26:42.399 [2024-07-15 14:05:37.110899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.399 [2024-07-15 14:05:37.110925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.399 qpair failed and we were unable to recover it. 00:26:42.399 [2024-07-15 14:05:37.111122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.399 [2024-07-15 14:05:37.111191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.399 qpair failed and we were unable to recover it. 00:26:42.399 [2024-07-15 14:05:37.111473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.399 [2024-07-15 14:05:37.111531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.399 qpair failed and we were unable to recover it. 00:26:42.399 [2024-07-15 14:05:37.111782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.399 [2024-07-15 14:05:37.111824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.399 qpair failed and we were unable to recover it. 
00:26:42.399 [2024-07-15 14:05:37.111961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.399 [2024-07-15 14:05:37.111986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.399 qpair failed and we were unable to recover it. 00:26:42.399 [2024-07-15 14:05:37.112224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.399 [2024-07-15 14:05:37.112282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.399 qpair failed and we were unable to recover it. 00:26:42.399 [2024-07-15 14:05:37.112487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.399 [2024-07-15 14:05:37.112545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.399 qpair failed and we were unable to recover it. 00:26:42.399 [2024-07-15 14:05:37.112810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.399 [2024-07-15 14:05:37.112835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.399 qpair failed and we were unable to recover it. 00:26:42.399 [2024-07-15 14:05:37.113014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.399 [2024-07-15 14:05:37.113052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.399 qpair failed and we were unable to recover it. 00:26:42.399 [2024-07-15 14:05:37.113232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.399 [2024-07-15 14:05:37.113256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.399 qpair failed and we were unable to recover it. 00:26:42.399 [2024-07-15 14:05:37.113463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.399 [2024-07-15 14:05:37.113521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.399 qpair failed and we were unable to recover it. 00:26:42.399 [2024-07-15 14:05:37.113766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.399 [2024-07-15 14:05:37.113815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.399 qpair failed and we were unable to recover it. 00:26:42.399 [2024-07-15 14:05:37.113969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.399 [2024-07-15 14:05:37.113993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.399 qpair failed and we were unable to recover it. 00:26:42.399 [2024-07-15 14:05:37.114147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.400 [2024-07-15 14:05:37.114206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.400 qpair failed and we were unable to recover it. 
00:26:42.400 [2024-07-15 14:05:37.114515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.400 [2024-07-15 14:05:37.114572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.400 qpair failed and we were unable to recover it. 00:26:42.400 [2024-07-15 14:05:37.114802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.400 [2024-07-15 14:05:37.114842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.400 qpair failed and we were unable to recover it. 00:26:42.400 [2024-07-15 14:05:37.114952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.400 [2024-07-15 14:05:37.114976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.400 qpair failed and we were unable to recover it. 00:26:42.400 [2024-07-15 14:05:37.115142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.400 [2024-07-15 14:05:37.115200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.400 qpair failed and we were unable to recover it. 00:26:42.400 [2024-07-15 14:05:37.115426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.400 [2024-07-15 14:05:37.115484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.400 qpair failed and we were unable to recover it. 00:26:42.400 [2024-07-15 14:05:37.115801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.400 [2024-07-15 14:05:37.115825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.400 qpair failed and we were unable to recover it. 00:26:42.400 [2024-07-15 14:05:37.116058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.400 [2024-07-15 14:05:37.116117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.400 qpair failed and we were unable to recover it. 00:26:42.400 [2024-07-15 14:05:37.116417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.400 [2024-07-15 14:05:37.116475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.400 qpair failed and we were unable to recover it. 00:26:42.400 [2024-07-15 14:05:37.116662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.400 [2024-07-15 14:05:37.116720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.400 qpair failed and we were unable to recover it. 00:26:42.400 [2024-07-15 14:05:37.116916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.400 [2024-07-15 14:05:37.116940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.400 qpair failed and we were unable to recover it. 
00:26:42.400 [2024-07-15 14:05:37.117126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.400 [2024-07-15 14:05:37.117164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.400 qpair failed and we were unable to recover it. 00:26:42.400 [2024-07-15 14:05:37.117418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.400 [2024-07-15 14:05:37.117475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.400 qpair failed and we were unable to recover it. 00:26:42.400 [2024-07-15 14:05:37.117809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.400 [2024-07-15 14:05:37.117839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.400 qpair failed and we were unable to recover it. 00:26:42.400 [2024-07-15 14:05:37.117992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.400 [2024-07-15 14:05:37.118015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.400 qpair failed and we were unable to recover it. 00:26:42.400 [2024-07-15 14:05:37.118201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.400 [2024-07-15 14:05:37.118252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.400 qpair failed and we were unable to recover it. 00:26:42.400 [2024-07-15 14:05:37.118516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.400 [2024-07-15 14:05:37.118575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.400 qpair failed and we were unable to recover it. 00:26:42.400 [2024-07-15 14:05:37.118798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.400 [2024-07-15 14:05:37.118847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.400 qpair failed and we were unable to recover it. 00:26:42.400 [2024-07-15 14:05:37.118984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.400 [2024-07-15 14:05:37.119033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.400 qpair failed and we were unable to recover it. 00:26:42.400 [2024-07-15 14:05:37.119239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.400 [2024-07-15 14:05:37.119298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.400 qpair failed and we were unable to recover it. 00:26:42.400 [2024-07-15 14:05:37.119606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.400 [2024-07-15 14:05:37.119664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.400 qpair failed and we were unable to recover it. 
00:26:42.400 [2024-07-15 14:05:37.119871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.400 [2024-07-15 14:05:37.119896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.400 qpair failed and we were unable to recover it. 00:26:42.400 [2024-07-15 14:05:37.120100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.400 [2024-07-15 14:05:37.120158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.400 qpair failed and we were unable to recover it. 00:26:42.400 [2024-07-15 14:05:37.120357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.400 [2024-07-15 14:05:37.120397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.400 qpair failed and we were unable to recover it. 00:26:42.400 [2024-07-15 14:05:37.120626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.400 [2024-07-15 14:05:37.120689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.400 qpair failed and we were unable to recover it. 00:26:42.400 [2024-07-15 14:05:37.120964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.400 [2024-07-15 14:05:37.121005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.400 qpair failed and we were unable to recover it. 00:26:42.400 [2024-07-15 14:05:37.121169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.400 [2024-07-15 14:05:37.121208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.400 qpair failed and we were unable to recover it. 00:26:42.400 [2024-07-15 14:05:37.121446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.400 [2024-07-15 14:05:37.121504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.400 qpair failed and we were unable to recover it. 00:26:42.400 [2024-07-15 14:05:37.121713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.400 [2024-07-15 14:05:37.121787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.400 qpair failed and we were unable to recover it. 00:26:42.400 [2024-07-15 14:05:37.122007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.400 [2024-07-15 14:05:37.122047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.400 qpair failed and we were unable to recover it. 00:26:42.400 [2024-07-15 14:05:37.122226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.400 [2024-07-15 14:05:37.122284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.400 qpair failed and we were unable to recover it. 
00:26:42.400 [2024-07-15 14:05:37.122452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.400 [2024-07-15 14:05:37.122510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.400 qpair failed and we were unable to recover it. 00:26:42.400 [2024-07-15 14:05:37.122760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.400 [2024-07-15 14:05:37.122802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.400 qpair failed and we were unable to recover it. 00:26:42.400 [2024-07-15 14:05:37.122993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.400 [2024-07-15 14:05:37.123061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.400 qpair failed and we were unable to recover it. 00:26:42.400 [2024-07-15 14:05:37.123258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.400 [2024-07-15 14:05:37.123317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.400 qpair failed and we were unable to recover it. 00:26:42.400 [2024-07-15 14:05:37.123520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.400 [2024-07-15 14:05:37.123562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.400 qpair failed and we were unable to recover it. 00:26:42.400 [2024-07-15 14:05:37.123806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.400 [2024-07-15 14:05:37.123868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.400 qpair failed and we were unable to recover it. 00:26:42.400 [2024-07-15 14:05:37.124010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.400 [2024-07-15 14:05:37.124052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.400 qpair failed and we were unable to recover it. 00:26:42.400 [2024-07-15 14:05:37.124234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.400 [2024-07-15 14:05:37.124276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.400 qpair failed and we were unable to recover it. 00:26:42.401 [2024-07-15 14:05:37.124512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.401 [2024-07-15 14:05:37.124571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.401 qpair failed and we were unable to recover it. 00:26:42.401 [2024-07-15 14:05:37.124783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.401 [2024-07-15 14:05:37.124848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.401 qpair failed and we were unable to recover it. 
00:26:42.401 [2024-07-15 14:05:37.125066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.401 [2024-07-15 14:05:37.125108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.401 qpair failed and we were unable to recover it. 00:26:42.401 [2024-07-15 14:05:37.125263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.401 [2024-07-15 14:05:37.125321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.401 qpair failed and we were unable to recover it. 00:26:42.401 [2024-07-15 14:05:37.125521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.401 [2024-07-15 14:05:37.125579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.401 qpair failed and we were unable to recover it. 00:26:42.401 [2024-07-15 14:05:37.125831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.401 [2024-07-15 14:05:37.125876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.401 qpair failed and we were unable to recover it. 00:26:42.401 [2024-07-15 14:05:37.126079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.401 [2024-07-15 14:05:37.126139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.401 qpair failed and we were unable to recover it. 00:26:42.401 [2024-07-15 14:05:37.126397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.401 [2024-07-15 14:05:37.126455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.401 qpair failed and we were unable to recover it. 00:26:42.401 [2024-07-15 14:05:37.126692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.401 [2024-07-15 14:05:37.126750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.401 qpair failed and we were unable to recover it. 00:26:42.401 [2024-07-15 14:05:37.126987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.401 [2024-07-15 14:05:37.127051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.401 qpair failed and we were unable to recover it. 00:26:42.401 [2024-07-15 14:05:37.127227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.401 [2024-07-15 14:05:37.127286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.401 qpair failed and we were unable to recover it. 00:26:42.401 [2024-07-15 14:05:37.127501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.401 [2024-07-15 14:05:37.127546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.401 qpair failed and we were unable to recover it. 
00:26:42.401 [2024-07-15 14:05:37.127814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.401 [2024-07-15 14:05:37.127861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.401 qpair failed and we were unable to recover it. 00:26:42.401 [2024-07-15 14:05:37.128121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.401 [2024-07-15 14:05:37.128181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.401 qpair failed and we were unable to recover it. 00:26:42.401 [2024-07-15 14:05:37.128466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.401 [2024-07-15 14:05:37.128521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.401 qpair failed and we were unable to recover it. 00:26:42.401 [2024-07-15 14:05:37.128753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.401 [2024-07-15 14:05:37.128825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.401 qpair failed and we were unable to recover it. 00:26:42.401 [2024-07-15 14:05:37.129117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.401 [2024-07-15 14:05:37.129176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.401 qpair failed and we were unable to recover it. 00:26:42.401 [2024-07-15 14:05:37.129442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.401 [2024-07-15 14:05:37.129489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.401 qpair failed and we were unable to recover it. 00:26:42.401 [2024-07-15 14:05:37.129700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.401 [2024-07-15 14:05:37.129773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.401 qpair failed and we were unable to recover it. 00:26:42.401 [2024-07-15 14:05:37.129985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.401 [2024-07-15 14:05:37.130033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.401 qpair failed and we were unable to recover it. 00:26:42.401 [2024-07-15 14:05:37.130254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.401 [2024-07-15 14:05:37.130301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.401 qpair failed and we were unable to recover it. 00:26:42.401 [2024-07-15 14:05:37.130561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.401 [2024-07-15 14:05:37.130620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.401 qpair failed and we were unable to recover it. 
00:26:42.401 [2024-07-15 14:05:37.130985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.401 [2024-07-15 14:05:37.131034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.401 qpair failed and we were unable to recover it. 00:26:42.401 [2024-07-15 14:05:37.131373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.401 [2024-07-15 14:05:37.131421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.401 qpair failed and we were unable to recover it. 00:26:42.401 [2024-07-15 14:05:37.131707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.401 [2024-07-15 14:05:37.131779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.401 qpair failed and we were unable to recover it. 00:26:42.401 [2024-07-15 14:05:37.132102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.401 [2024-07-15 14:05:37.132160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.401 qpair failed and we were unable to recover it. 00:26:42.401 [2024-07-15 14:05:37.132431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.401 [2024-07-15 14:05:37.132482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.401 qpair failed and we were unable to recover it. 00:26:42.401 [2024-07-15 14:05:37.132792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.401 [2024-07-15 14:05:37.132844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.401 qpair failed and we were unable to recover it. 00:26:42.401 [2024-07-15 14:05:37.133076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.401 [2024-07-15 14:05:37.133151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.401 qpair failed and we were unable to recover it. 00:26:42.401 [2024-07-15 14:05:37.133499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.401 [2024-07-15 14:05:37.133550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.401 qpair failed and we were unable to recover it. 00:26:42.401 [2024-07-15 14:05:37.133807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.401 [2024-07-15 14:05:37.133867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.401 qpair failed and we were unable to recover it. 00:26:42.401 [2024-07-15 14:05:37.134099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.401 [2024-07-15 14:05:37.134174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.401 qpair failed and we were unable to recover it. 
00:26:42.401 [2024-07-15 14:05:37.134503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.401 [2024-07-15 14:05:37.134554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.401 qpair failed and we were unable to recover it. 00:26:42.401 [2024-07-15 14:05:37.134788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.401 [2024-07-15 14:05:37.134848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.401 qpair failed and we were unable to recover it. 00:26:42.401 [2024-07-15 14:05:37.135087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.401 [2024-07-15 14:05:37.135146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.401 qpair failed and we were unable to recover it. 00:26:42.401 [2024-07-15 14:05:37.135369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.401 [2024-07-15 14:05:37.135423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.401 qpair failed and we were unable to recover it. 00:26:42.401 [2024-07-15 14:05:37.135636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.401 [2024-07-15 14:05:37.135694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.401 qpair failed and we were unable to recover it. 00:26:42.401 [2024-07-15 14:05:37.136018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.401 [2024-07-15 14:05:37.136073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.401 qpair failed and we were unable to recover it. 00:26:42.401 [2024-07-15 14:05:37.136405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.401 [2024-07-15 14:05:37.136458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.401 qpair failed and we were unable to recover it. 00:26:42.401 [2024-07-15 14:05:37.136832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.402 [2024-07-15 14:05:37.136887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.402 qpair failed and we were unable to recover it. 00:26:42.402 [2024-07-15 14:05:37.137087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.402 [2024-07-15 14:05:37.137146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.402 qpair failed and we were unable to recover it. 00:26:42.402 [2024-07-15 14:05:37.137397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.402 [2024-07-15 14:05:37.137452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.402 qpair failed and we were unable to recover it. 
00:26:42.402 [2024-07-15 14:05:37.137684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.402 [2024-07-15 14:05:37.137755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.402 qpair failed and we were unable to recover it. 00:26:42.402 [2024-07-15 14:05:37.138023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.402 [2024-07-15 14:05:37.138095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.402 qpair failed and we were unable to recover it. 00:26:42.402 [2024-07-15 14:05:37.138329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.402 [2024-07-15 14:05:37.138383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.402 qpair failed and we were unable to recover it. 00:26:42.402 [2024-07-15 14:05:37.138584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.402 [2024-07-15 14:05:37.138642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.402 qpair failed and we were unable to recover it. 00:26:42.402 [2024-07-15 14:05:37.138971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.402 [2024-07-15 14:05:37.139026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.402 qpair failed and we were unable to recover it. 00:26:42.402 [2024-07-15 14:05:37.139259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.402 [2024-07-15 14:05:37.139313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.402 qpair failed and we were unable to recover it. 00:26:42.402 [2024-07-15 14:05:37.139602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.402 [2024-07-15 14:05:37.139660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.402 qpair failed and we were unable to recover it. 00:26:42.402 [2024-07-15 14:05:37.139955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.402 [2024-07-15 14:05:37.140010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.402 qpair failed and we were unable to recover it. 00:26:42.402 [2024-07-15 14:05:37.140257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.402 [2024-07-15 14:05:37.140316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.402 qpair failed and we were unable to recover it. 00:26:42.402 [2024-07-15 14:05:37.140644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.402 [2024-07-15 14:05:37.140703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.402 qpair failed and we were unable to recover it. 
00:26:42.402 [2024-07-15 14:05:37.141058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.402 [2024-07-15 14:05:37.141117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420
00:26:42.402 qpair failed and we were unable to recover it.
00:26:42.407 [2024-07-15 14:05:37.206549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.407 [2024-07-15 14:05:37.206607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420
00:26:42.407 qpair failed and we were unable to recover it.
00:26:42.407 [2024-07-15 14:05:37.206839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.407 [2024-07-15 14:05:37.206917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.407 qpair failed and we were unable to recover it. 00:26:42.407 [2024-07-15 14:05:37.207139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.407 [2024-07-15 14:05:37.207198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.407 qpair failed and we were unable to recover it. 00:26:42.407 [2024-07-15 14:05:37.207414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.407 [2024-07-15 14:05:37.207474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.407 qpair failed and we were unable to recover it. 00:26:42.407 [2024-07-15 14:05:37.207847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.407 [2024-07-15 14:05:37.207932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.407 qpair failed and we were unable to recover it. 00:26:42.407 [2024-07-15 14:05:37.208194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.407 [2024-07-15 14:05:37.208271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.407 qpair failed and we were unable to recover it. 00:26:42.407 [2024-07-15 14:05:37.208492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.408 [2024-07-15 14:05:37.208567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.408 qpair failed and we were unable to recover it. 00:26:42.408 [2024-07-15 14:05:37.208884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.408 [2024-07-15 14:05:37.208944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.408 qpair failed and we were unable to recover it. 00:26:42.408 [2024-07-15 14:05:37.209268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.408 [2024-07-15 14:05:37.209327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.408 qpair failed and we were unable to recover it. 00:26:42.408 [2024-07-15 14:05:37.209527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.408 [2024-07-15 14:05:37.209586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.408 qpair failed and we were unable to recover it. 00:26:42.408 [2024-07-15 14:05:37.209791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.408 [2024-07-15 14:05:37.209853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.408 qpair failed and we were unable to recover it. 
00:26:42.408 [2024-07-15 14:05:37.210073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.408 [2024-07-15 14:05:37.210150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.408 qpair failed and we were unable to recover it. 00:26:42.408 [2024-07-15 14:05:37.210352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.408 [2024-07-15 14:05:37.210428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.408 qpair failed and we were unable to recover it. 00:26:42.408 [2024-07-15 14:05:37.210618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.408 [2024-07-15 14:05:37.210677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.408 qpair failed and we were unable to recover it. 00:26:42.408 [2024-07-15 14:05:37.211010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.408 [2024-07-15 14:05:37.211106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.408 qpair failed and we were unable to recover it. 00:26:42.408 [2024-07-15 14:05:37.211469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.408 [2024-07-15 14:05:37.211529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.408 qpair failed and we were unable to recover it. 00:26:42.408 [2024-07-15 14:05:37.211717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.408 [2024-07-15 14:05:37.211834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.408 qpair failed and we were unable to recover it. 00:26:42.408 [2024-07-15 14:05:37.212059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.408 [2024-07-15 14:05:37.212119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.408 qpair failed and we were unable to recover it. 00:26:42.408 [2024-07-15 14:05:37.212320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.408 [2024-07-15 14:05:37.212406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.408 qpair failed and we were unable to recover it. 00:26:42.408 [2024-07-15 14:05:37.212654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.408 [2024-07-15 14:05:37.212715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.408 qpair failed and we were unable to recover it. 00:26:42.408 [2024-07-15 14:05:37.212968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.408 [2024-07-15 14:05:37.213046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.408 qpair failed and we were unable to recover it. 
00:26:42.408 [2024-07-15 14:05:37.213250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.408 [2024-07-15 14:05:37.213326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.408 qpair failed and we were unable to recover it. 00:26:42.408 [2024-07-15 14:05:37.213546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.408 [2024-07-15 14:05:37.213605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.408 qpair failed and we were unable to recover it. 00:26:42.408 [2024-07-15 14:05:37.213831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.408 [2024-07-15 14:05:37.213911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.408 qpair failed and we were unable to recover it. 00:26:42.408 [2024-07-15 14:05:37.214223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.678 [2024-07-15 14:05:37.214299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.678 qpair failed and we were unable to recover it. 00:26:42.678 [2024-07-15 14:05:37.214517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.678 [2024-07-15 14:05:37.214577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.678 qpair failed and we were unable to recover it. 00:26:42.678 [2024-07-15 14:05:37.214824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.678 [2024-07-15 14:05:37.214902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.678 qpair failed and we were unable to recover it. 00:26:42.678 [2024-07-15 14:05:37.215155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.678 [2024-07-15 14:05:37.215233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.678 qpair failed and we were unable to recover it. 00:26:42.678 [2024-07-15 14:05:37.215450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.678 [2024-07-15 14:05:37.215514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.678 qpair failed and we were unable to recover it. 00:26:42.678 [2024-07-15 14:05:37.215723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.678 [2024-07-15 14:05:37.215796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.678 qpair failed and we were unable to recover it. 00:26:42.678 [2024-07-15 14:05:37.216008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.678 [2024-07-15 14:05:37.216077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.678 qpair failed and we were unable to recover it. 
00:26:42.678 [2024-07-15 14:05:37.216316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.678 [2024-07-15 14:05:37.216376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.678 qpair failed and we were unable to recover it. 00:26:42.678 [2024-07-15 14:05:37.216656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.678 [2024-07-15 14:05:37.216721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.678 qpair failed and we were unable to recover it. 00:26:42.678 [2024-07-15 14:05:37.217065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.678 [2024-07-15 14:05:37.217137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.678 qpair failed and we were unable to recover it. 00:26:42.678 [2024-07-15 14:05:37.217506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 14:05:37.217573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 14:05:37.217895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 14:05:37.217974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 14:05:37.218284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 14:05:37.218361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 14:05:37.218579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 14:05:37.218638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 14:05:37.218921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 14:05:37.218998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 14:05:37.219230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 14:05:37.219306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 14:05:37.219515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 14:05:37.219573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 
00:26:42.679 [2024-07-15 14:05:37.219765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 14:05:37.219826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 14:05:37.220053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 14:05:37.220112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 14:05:37.220412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 14:05:37.220488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 14:05:37.220801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 14:05:37.220862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 14:05:37.221161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 14:05:37.221237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 14:05:37.221548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 14:05:37.221624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 14:05:37.221887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 14:05:37.221966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 14:05:37.222219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 14:05:37.222294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 14:05:37.222578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 14:05:37.222656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 14:05:37.222947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 14:05:37.223027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 
00:26:42.679 [2024-07-15 14:05:37.223216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 14:05:37.223291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 14:05:37.223555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 14:05:37.223614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 14:05:37.223860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 14:05:37.223939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 14:05:37.224222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 14:05:37.224297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 14:05:37.224543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 14:05:37.224602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 14:05:37.224909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 14:05:37.224986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 14:05:37.225303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 14:05:37.225378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 14:05:37.225609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 14:05:37.225669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 14:05:37.225936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 14:05:37.226012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 14:05:37.226287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 14:05:37.226365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 
00:26:42.679 [2024-07-15 14:05:37.226554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 14:05:37.226613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 14:05:37.226928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 14:05:37.227015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 14:05:37.227267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 14:05:37.227343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 14:05:37.227555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 14:05:37.227615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 14:05:37.227796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 14:05:37.227880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 14:05:37.228186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 14:05:37.228262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 14:05:37.228559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 14:05:37.228618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 14:05:37.228841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.679 [2024-07-15 14:05:37.228918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.679 qpair failed and we were unable to recover it. 00:26:42.679 [2024-07-15 14:05:37.229105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.680 [2024-07-15 14:05:37.229182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.680 qpair failed and we were unable to recover it. 00:26:42.680 [2024-07-15 14:05:37.229441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.680 [2024-07-15 14:05:37.229515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.680 qpair failed and we were unable to recover it. 
00:26:42.680 [2024-07-15 14:05:37.229851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.680 [2024-07-15 14:05:37.229928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.680 qpair failed and we were unable to recover it. 00:26:42.680 [2024-07-15 14:05:37.230168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.680 [2024-07-15 14:05:37.230245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.680 qpair failed and we were unable to recover it. 00:26:42.680 [2024-07-15 14:05:37.230478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.680 [2024-07-15 14:05:37.230554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.680 qpair failed and we were unable to recover it. 00:26:42.680 [2024-07-15 14:05:37.230888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.680 [2024-07-15 14:05:37.230971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.680 qpair failed and we were unable to recover it. 00:26:42.680 [2024-07-15 14:05:37.231206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.680 [2024-07-15 14:05:37.231283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.680 qpair failed and we were unable to recover it. 00:26:42.680 [2024-07-15 14:05:37.231575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.680 [2024-07-15 14:05:37.231634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.680 qpair failed and we were unable to recover it. 00:26:42.680 [2024-07-15 14:05:37.231846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.680 [2024-07-15 14:05:37.231925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.680 qpair failed and we were unable to recover it. 00:26:42.680 [2024-07-15 14:05:37.232112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.680 [2024-07-15 14:05:37.232191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.680 qpair failed and we were unable to recover it. 00:26:42.680 [2024-07-15 14:05:37.232396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.680 [2024-07-15 14:05:37.232471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.680 qpair failed and we were unable to recover it. 00:26:42.680 [2024-07-15 14:05:37.232788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.680 [2024-07-15 14:05:37.232848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.680 qpair failed and we were unable to recover it. 
00:26:42.680 [2024-07-15 14:05:37.233162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.680 [2024-07-15 14:05:37.233239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.680 qpair failed and we were unable to recover it. 00:26:42.680 [2024-07-15 14:05:37.233467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.680 [2024-07-15 14:05:37.233544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.680 qpair failed and we were unable to recover it. 00:26:42.680 [2024-07-15 14:05:37.233883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.680 [2024-07-15 14:05:37.233961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.680 qpair failed and we were unable to recover it. 00:26:42.680 [2024-07-15 14:05:37.234269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.680 [2024-07-15 14:05:37.234344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.680 qpair failed and we were unable to recover it. 00:26:42.680 [2024-07-15 14:05:37.234626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.680 [2024-07-15 14:05:37.234684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.680 qpair failed and we were unable to recover it. 00:26:42.680 [2024-07-15 14:05:37.234909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.680 [2024-07-15 14:05:37.234987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.680 qpair failed and we were unable to recover it. 00:26:42.680 [2024-07-15 14:05:37.235302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.680 [2024-07-15 14:05:37.235378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.680 qpair failed and we were unable to recover it. 00:26:42.680 [2024-07-15 14:05:37.235691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.680 [2024-07-15 14:05:37.235762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.680 qpair failed and we were unable to recover it. 00:26:42.680 [2024-07-15 14:05:37.236077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.680 [2024-07-15 14:05:37.236155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.680 qpair failed and we were unable to recover it. 00:26:42.680 [2024-07-15 14:05:37.236463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.680 [2024-07-15 14:05:37.236540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.680 qpair failed and we were unable to recover it. 
00:26:42.680 [2024-07-15 14:05:37.236828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.680 [2024-07-15 14:05:37.236908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.680 qpair failed and we were unable to recover it. 00:26:42.680 [2024-07-15 14:05:37.237141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.680 [2024-07-15 14:05:37.237218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.680 qpair failed and we were unable to recover it. 00:26:42.680 [2024-07-15 14:05:37.237479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.680 [2024-07-15 14:05:37.237539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.680 qpair failed and we were unable to recover it. 00:26:42.680 [2024-07-15 14:05:37.237750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.680 [2024-07-15 14:05:37.237809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.680 qpair failed and we were unable to recover it. 00:26:42.680 [2024-07-15 14:05:37.238031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.680 [2024-07-15 14:05:37.238108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.680 qpair failed and we were unable to recover it. 00:26:42.680 [2024-07-15 14:05:37.238344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.680 [2024-07-15 14:05:37.238403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.680 qpair failed and we were unable to recover it. 00:26:42.680 [2024-07-15 14:05:37.238683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.680 [2024-07-15 14:05:37.238754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.680 qpair failed and we were unable to recover it. 00:26:42.680 [2024-07-15 14:05:37.238942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.680 [2024-07-15 14:05:37.239002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.680 qpair failed and we were unable to recover it. 00:26:42.680 [2024-07-15 14:05:37.239239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.680 [2024-07-15 14:05:37.239297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.680 qpair failed and we were unable to recover it. 00:26:42.680 [2024-07-15 14:05:37.239593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.680 [2024-07-15 14:05:37.239661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.680 qpair failed and we were unable to recover it. 
00:26:42.680 [2024-07-15 14:05:37.239955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.680 [2024-07-15 14:05:37.240015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.680 qpair failed and we were unable to recover it. 00:26:42.680 [2024-07-15 14:05:37.240336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.680 [2024-07-15 14:05:37.240411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.680 qpair failed and we were unable to recover it. 00:26:42.680 [2024-07-15 14:05:37.240687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.680 [2024-07-15 14:05:37.240777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.680 qpair failed and we were unable to recover it. 00:26:42.680 [2024-07-15 14:05:37.240997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.680 [2024-07-15 14:05:37.241074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.680 qpair failed and we were unable to recover it. 00:26:42.680 [2024-07-15 14:05:37.241371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.680 [2024-07-15 14:05:37.241448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.680 qpair failed and we were unable to recover it. 00:26:42.680 [2024-07-15 14:05:37.241677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.680 [2024-07-15 14:05:37.241736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.680 qpair failed and we were unable to recover it. 00:26:42.680 [2024-07-15 14:05:37.242090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.680 [2024-07-15 14:05:37.242172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.680 qpair failed and we were unable to recover it. 00:26:42.680 [2024-07-15 14:05:37.242477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.680 [2024-07-15 14:05:37.242554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.680 qpair failed and we were unable to recover it. 00:26:42.680 [2024-07-15 14:05:37.242815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.681 [2024-07-15 14:05:37.242893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.681 qpair failed and we were unable to recover it. 00:26:42.681 [2024-07-15 14:05:37.243230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.681 [2024-07-15 14:05:37.243307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.681 qpair failed and we were unable to recover it. 
00:26:42.681 [2024-07-15 14:05:37.243565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.681 [2024-07-15 14:05:37.243642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.681 qpair failed and we were unable to recover it. 00:26:42.681 [2024-07-15 14:05:37.243946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.681 [2024-07-15 14:05:37.244023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.681 qpair failed and we were unable to recover it. 00:26:42.681 [2024-07-15 14:05:37.244361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.681 [2024-07-15 14:05:37.244439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.681 qpair failed and we were unable to recover it. 00:26:42.681 [2024-07-15 14:05:37.244671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.681 [2024-07-15 14:05:37.244729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.681 qpair failed and we were unable to recover it. 00:26:42.681 [2024-07-15 14:05:37.244973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.681 [2024-07-15 14:05:37.245033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.681 qpair failed and we were unable to recover it. 00:26:42.681 [2024-07-15 14:05:37.245389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.681 [2024-07-15 14:05:37.245464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.681 qpair failed and we were unable to recover it. 00:26:42.681 [2024-07-15 14:05:37.245780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.681 [2024-07-15 14:05:37.245839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.681 qpair failed and we were unable to recover it. 00:26:42.681 [2024-07-15 14:05:37.246028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.681 [2024-07-15 14:05:37.246107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.681 qpair failed and we were unable to recover it. 00:26:42.681 [2024-07-15 14:05:37.246421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.681 [2024-07-15 14:05:37.246497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.681 qpair failed and we were unable to recover it. 00:26:42.681 [2024-07-15 14:05:37.246861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.681 [2024-07-15 14:05:37.246921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.681 qpair failed and we were unable to recover it. 
00:26:42.681 [2024-07-15 14:05:37.247152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.681 [2024-07-15 14:05:37.247229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.681 qpair failed and we were unable to recover it. 00:26:42.681 [2024-07-15 14:05:37.247427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.681 [2024-07-15 14:05:37.247504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.681 qpair failed and we were unable to recover it. 00:26:42.681 [2024-07-15 14:05:37.247772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.681 [2024-07-15 14:05:37.247832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.681 qpair failed and we were unable to recover it. 00:26:42.681 [2024-07-15 14:05:37.248140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.681 [2024-07-15 14:05:37.248216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.681 qpair failed and we were unable to recover it. 00:26:42.681 [2024-07-15 14:05:37.248422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.681 [2024-07-15 14:05:37.248499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.681 qpair failed and we were unable to recover it. 00:26:42.681 [2024-07-15 14:05:37.248857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.681 [2024-07-15 14:05:37.248917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.681 qpair failed and we were unable to recover it. 00:26:42.681 [2024-07-15 14:05:37.249115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.681 [2024-07-15 14:05:37.249193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.681 qpair failed and we were unable to recover it. 00:26:42.681 [2024-07-15 14:05:37.249444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.681 [2024-07-15 14:05:37.249520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.681 qpair failed and we were unable to recover it. 00:26:42.681 [2024-07-15 14:05:37.249864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.681 [2024-07-15 14:05:37.249925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.681 qpair failed and we were unable to recover it. 00:26:42.681 [2024-07-15 14:05:37.250191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.681 [2024-07-15 14:05:37.250258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.681 qpair failed and we were unable to recover it. 
00:26:42.681 [2024-07-15 14:05:37.250487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.681 [2024-07-15 14:05:37.250565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.681 qpair failed and we were unable to recover it. 00:26:42.681 [2024-07-15 14:05:37.250759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.681 [2024-07-15 14:05:37.250820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.681 qpair failed and we were unable to recover it. 00:26:42.681 [2024-07-15 14:05:37.251049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.681 [2024-07-15 14:05:37.251133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.681 qpair failed and we were unable to recover it. 00:26:42.681 [2024-07-15 14:05:37.251398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.681 [2024-07-15 14:05:37.251474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.681 qpair failed and we were unable to recover it. 00:26:42.681 [2024-07-15 14:05:37.251718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.681 [2024-07-15 14:05:37.251797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.681 qpair failed and we were unable to recover it. 00:26:42.681 [2024-07-15 14:05:37.252058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.681 [2024-07-15 14:05:37.252136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.681 qpair failed and we were unable to recover it. 00:26:42.681 [2024-07-15 14:05:37.252392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.681 [2024-07-15 14:05:37.252467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.681 qpair failed and we were unable to recover it. 00:26:42.681 [2024-07-15 14:05:37.252807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.681 [2024-07-15 14:05:37.252869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.681 qpair failed and we were unable to recover it. 00:26:42.681 [2024-07-15 14:05:37.253099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.681 [2024-07-15 14:05:37.253159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.681 qpair failed and we were unable to recover it. 00:26:42.681 [2024-07-15 14:05:37.253411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.681 [2024-07-15 14:05:37.253496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.681 qpair failed and we were unable to recover it. 
00:26:42.681 [2024-07-15 14:05:37.253807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.681 [2024-07-15 14:05:37.253886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.681 qpair failed and we were unable to recover it. 00:26:42.681 [2024-07-15 14:05:37.254118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.681 [2024-07-15 14:05:37.254193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.681 qpair failed and we were unable to recover it. 00:26:42.681 [2024-07-15 14:05:37.254491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.681 [2024-07-15 14:05:37.254550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.681 qpair failed and we were unable to recover it. 00:26:42.681 [2024-07-15 14:05:37.254773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.681 [2024-07-15 14:05:37.254834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.681 qpair failed and we were unable to recover it. 00:26:42.682 [2024-07-15 14:05:37.255085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.682 [2024-07-15 14:05:37.255161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.682 qpair failed and we were unable to recover it. 00:26:42.682 [2024-07-15 14:05:37.255405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.682 [2024-07-15 14:05:37.255482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.682 qpair failed and we were unable to recover it. 00:26:42.682 [2024-07-15 14:05:37.255674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.682 [2024-07-15 14:05:37.255733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.682 qpair failed and we were unable to recover it. 00:26:42.682 [2024-07-15 14:05:37.255986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.682 [2024-07-15 14:05:37.256064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.682 qpair failed and we were unable to recover it. 00:26:42.682 [2024-07-15 14:05:37.256331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.682 [2024-07-15 14:05:37.256408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.682 qpair failed and we were unable to recover it. 00:26:42.682 [2024-07-15 14:05:37.256733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.682 [2024-07-15 14:05:37.256825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.682 qpair failed and we were unable to recover it. 
00:26:42.682 [2024-07-15 14:05:37.257084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.682 [2024-07-15 14:05:37.257160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.682 qpair failed and we were unable to recover it. 00:26:42.682 [2024-07-15 14:05:37.257375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.682 [2024-07-15 14:05:37.257435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.682 qpair failed and we were unable to recover it. 00:26:42.682 [2024-07-15 14:05:37.257633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.682 [2024-07-15 14:05:37.257692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.682 qpair failed and we were unable to recover it. 00:26:42.682 [2024-07-15 14:05:37.257964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.682 [2024-07-15 14:05:37.258040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.682 qpair failed and we were unable to recover it. 00:26:42.682 [2024-07-15 14:05:37.258325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.682 [2024-07-15 14:05:37.258401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.682 qpair failed and we were unable to recover it. 00:26:42.682 [2024-07-15 14:05:37.258716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.682 [2024-07-15 14:05:37.258791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.682 qpair failed and we were unable to recover it. 00:26:42.682 [2024-07-15 14:05:37.259107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.682 [2024-07-15 14:05:37.259184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.682 qpair failed and we were unable to recover it. 00:26:42.682 [2024-07-15 14:05:37.259426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.682 [2024-07-15 14:05:37.259502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.682 qpair failed and we were unable to recover it. 00:26:42.682 [2024-07-15 14:05:37.259790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.682 [2024-07-15 14:05:37.259851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.682 qpair failed and we were unable to recover it. 00:26:42.682 [2024-07-15 14:05:37.260181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.682 [2024-07-15 14:05:37.260257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.682 qpair failed and we were unable to recover it. 
00:26:42.682 [2024-07-15 14:05:37.260460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.682 [2024-07-15 14:05:37.260536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.682 qpair failed and we were unable to recover it. 00:26:42.682 [2024-07-15 14:05:37.260822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.682 [2024-07-15 14:05:37.260903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.682 qpair failed and we were unable to recover it. 00:26:42.682 [2024-07-15 14:05:37.261208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.682 [2024-07-15 14:05:37.261284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.682 qpair failed and we were unable to recover it. 00:26:42.682 [2024-07-15 14:05:37.261512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.682 [2024-07-15 14:05:37.261588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.682 qpair failed and we were unable to recover it. 00:26:42.682 [2024-07-15 14:05:37.261841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.682 [2024-07-15 14:05:37.261919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.682 qpair failed and we were unable to recover it. 00:26:42.682 [2024-07-15 14:05:37.262260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.682 [2024-07-15 14:05:37.262335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.682 qpair failed and we were unable to recover it. 00:26:42.682 [2024-07-15 14:05:37.262703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.682 [2024-07-15 14:05:37.262775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.682 qpair failed and we were unable to recover it. 00:26:42.682 [2024-07-15 14:05:37.263017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.682 [2024-07-15 14:05:37.263096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.682 qpair failed and we were unable to recover it. 00:26:42.682 [2024-07-15 14:05:37.263326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.682 [2024-07-15 14:05:37.263402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.682 qpair failed and we were unable to recover it. 00:26:42.682 [2024-07-15 14:05:37.263574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.682 [2024-07-15 14:05:37.263634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.682 qpair failed and we were unable to recover it. 
00:26:42.682 [2024-07-15 14:05:37.263950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.682 [2024-07-15 14:05:37.264037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.682 qpair failed and we were unable to recover it. 00:26:42.682 [2024-07-15 14:05:37.264255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.682 [2024-07-15 14:05:37.264314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.682 qpair failed and we were unable to recover it. 00:26:42.682 [2024-07-15 14:05:37.264563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.682 [2024-07-15 14:05:37.264622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.682 qpair failed and we were unable to recover it. 00:26:42.682 [2024-07-15 14:05:37.264956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.682 [2024-07-15 14:05:37.265033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.682 qpair failed and we were unable to recover it. 00:26:42.682 [2024-07-15 14:05:37.265373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.682 [2024-07-15 14:05:37.265449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.682 qpair failed and we were unable to recover it. 00:26:42.682 [2024-07-15 14:05:37.265681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.682 [2024-07-15 14:05:37.265752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.682 qpair failed and we were unable to recover it. 00:26:42.682 [2024-07-15 14:05:37.266046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.682 [2024-07-15 14:05:37.266123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.682 qpair failed and we were unable to recover it. 00:26:42.682 [2024-07-15 14:05:37.266371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.682 [2024-07-15 14:05:37.266448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.682 qpair failed and we were unable to recover it. 00:26:42.682 [2024-07-15 14:05:37.266755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.682 [2024-07-15 14:05:37.266816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.682 qpair failed and we were unable to recover it. 00:26:42.682 [2024-07-15 14:05:37.267183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.682 [2024-07-15 14:05:37.267277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.682 qpair failed and we were unable to recover it. 
00:26:42.682 [2024-07-15 14:05:37.267574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.682 [2024-07-15 14:05:37.267650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.682 qpair failed and we were unable to recover it. 00:26:42.682 [2024-07-15 14:05:37.267933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.682 [2024-07-15 14:05:37.267996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.682 qpair failed and we were unable to recover it. 00:26:42.682 [2024-07-15 14:05:37.268239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.682 [2024-07-15 14:05:37.268315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.682 qpair failed and we were unable to recover it. 00:26:42.683 [2024-07-15 14:05:37.268548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 14:05:37.268607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 00:26:42.683 [2024-07-15 14:05:37.268802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 14:05:37.268863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 00:26:42.683 [2024-07-15 14:05:37.269052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 14:05:37.269130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 00:26:42.683 [2024-07-15 14:05:37.269370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 14:05:37.269447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 00:26:42.683 [2024-07-15 14:05:37.269710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 14:05:37.269784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 00:26:42.683 [2024-07-15 14:05:37.270155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 14:05:37.270230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 00:26:42.683 [2024-07-15 14:05:37.270518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 14:05:37.270594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 
00:26:42.683 [2024-07-15 14:05:37.270966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 14:05:37.271043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 00:26:42.683 [2024-07-15 14:05:37.271354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 14:05:37.271430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 00:26:42.683 [2024-07-15 14:05:37.271707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 14:05:37.271778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 00:26:42.683 [2024-07-15 14:05:37.272020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 14:05:37.272096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 00:26:42.683 [2024-07-15 14:05:37.272309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 14:05:37.272386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 00:26:42.683 [2024-07-15 14:05:37.272577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 14:05:37.272636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 00:26:42.683 [2024-07-15 14:05:37.272968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 14:05:37.273050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 00:26:42.683 [2024-07-15 14:05:37.273361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 14:05:37.273438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 00:26:42.683 [2024-07-15 14:05:37.273752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 14:05:37.273812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 00:26:42.683 [2024-07-15 14:05:37.274135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 14:05:37.274212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 
00:26:42.683 [2024-07-15 14:05:37.274436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 14:05:37.274512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 00:26:42.683 [2024-07-15 14:05:37.274767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 14:05:37.274828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 00:26:42.683 [2024-07-15 14:05:37.275070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 14:05:37.275129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 00:26:42.683 [2024-07-15 14:05:37.275428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 14:05:37.275504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 00:26:42.683 [2024-07-15 14:05:37.275721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 14:05:37.275804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 00:26:42.683 [2024-07-15 14:05:37.276045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 14:05:37.276105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 00:26:42.683 [2024-07-15 14:05:37.276377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 14:05:37.276435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 00:26:42.683 [2024-07-15 14:05:37.276779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 14:05:37.276841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 00:26:42.683 [2024-07-15 14:05:37.277073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 14:05:37.277131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 00:26:42.683 [2024-07-15 14:05:37.277511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 14:05:37.277588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 
00:26:42.683 [2024-07-15 14:05:37.277825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 14:05:37.277883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 00:26:42.683 [2024-07-15 14:05:37.278106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 14:05:37.278181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 00:26:42.683 [2024-07-15 14:05:37.278458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.683 [2024-07-15 14:05:37.278534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.683 qpair failed and we were unable to recover it. 00:26:42.683 [2024-07-15 14:05:37.278899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 14:05:37.278960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 14:05:37.279288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 14:05:37.279369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 14:05:37.279734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 14:05:37.279806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 14:05:37.280004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 14:05:37.280079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 14:05:37.280266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 14:05:37.280348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 14:05:37.280602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 14:05:37.280679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 14:05:37.280927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 14:05:37.281014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 
00:26:42.684 [2024-07-15 14:05:37.281250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 14:05:37.281326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 14:05:37.281544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 14:05:37.281604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 14:05:37.281812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 14:05:37.281873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 14:05:37.282070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 14:05:37.282154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 14:05:37.282384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 14:05:37.282461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 14:05:37.282685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 14:05:37.282757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 14:05:37.282947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 14:05:37.283028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 14:05:37.283229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 14:05:37.283306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 14:05:37.283508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 14:05:37.283567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 14:05:37.283787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 14:05:37.283848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 
00:26:42.684 [2024-07-15 14:05:37.284081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 14:05:37.284141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 14:05:37.284374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 14:05:37.284433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 14:05:37.284794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 14:05:37.284855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 14:05:37.285094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 14:05:37.285155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 14:05:37.285356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 14:05:37.285416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 14:05:37.285633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 14:05:37.285692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 14:05:37.285941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 14:05:37.286002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 14:05:37.286214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 14:05:37.286292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 14:05:37.286543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 14:05:37.286622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 14:05:37.286811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 14:05:37.286893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 
00:26:42.684 [2024-07-15 14:05:37.287190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 14:05:37.287274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 14:05:37.287497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 14:05:37.287557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 14:05:37.287774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 14:05:37.287835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 14:05:37.288028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 14:05:37.288108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 14:05:37.288327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 14:05:37.288404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 14:05:37.288686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 14:05:37.288758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 14:05:37.289007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 14:05:37.289067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 14:05:37.289339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 14:05:37.289398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 14:05:37.289589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 14:05:37.289648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 14:05:37.289852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 14:05:37.289913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 
00:26:42.684 [2024-07-15 14:05:37.290109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 14:05:37.290169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 14:05:37.290355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 14:05:37.290414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.684 [2024-07-15 14:05:37.290624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.684 [2024-07-15 14:05:37.290684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.684 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 14:05:37.290911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 14:05:37.290972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 14:05:37.291255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 14:05:37.291315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 14:05:37.291558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 14:05:37.291616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 14:05:37.291858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 14:05:37.291936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 14:05:37.292157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 14:05:37.292235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 14:05:37.292483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 14:05:37.292543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 14:05:37.292755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 14:05:37.292825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 
00:26:42.685 [2024-07-15 14:05:37.293080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 14:05:37.293159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 14:05:37.293477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 14:05:37.293553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 14:05:37.293766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 14:05:37.293827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 14:05:37.294078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 14:05:37.294154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 14:05:37.294376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 14:05:37.294452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 14:05:37.294624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 14:05:37.294694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 14:05:37.295049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 14:05:37.295119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 14:05:37.295308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 14:05:37.295386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 14:05:37.295589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 14:05:37.295648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 14:05:37.295904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 14:05:37.295983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 
00:26:42.685 [2024-07-15 14:05:37.296298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 14:05:37.296376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 14:05:37.296610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 14:05:37.296669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 14:05:37.296892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 14:05:37.296973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 14:05:37.297261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 14:05:37.297339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 14:05:37.297574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 14:05:37.297634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 14:05:37.297847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 14:05:37.297926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 14:05:37.298101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 14:05:37.298126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 14:05:37.298291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 14:05:37.298370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 14:05:37.298545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 14:05:37.298604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 14:05:37.298856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 14:05:37.298918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 
00:26:42.685 [2024-07-15 14:05:37.299143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 14:05:37.299202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 14:05:37.299396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 14:05:37.299456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 14:05:37.299681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 14:05:37.299754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 14:05:37.299969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 14:05:37.300028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 14:05:37.300301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 14:05:37.300378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 14:05:37.300591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 14:05:37.300650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 14:05:37.300918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 14:05:37.300995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 14:05:37.301248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 14:05:37.301307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 14:05:37.301494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 14:05:37.301553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 14:05:37.301776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 14:05:37.301836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 
00:26:42.685 [2024-07-15 14:05:37.302081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 14:05:37.302141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 14:05:37.302384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 14:05:37.302460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 14:05:37.302788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.685 [2024-07-15 14:05:37.302860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.685 qpair failed and we were unable to recover it. 00:26:42.685 [2024-07-15 14:05:37.303183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.686 [2024-07-15 14:05:37.303258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.686 qpair failed and we were unable to recover it. 00:26:42.686 [2024-07-15 14:05:37.303434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.686 [2024-07-15 14:05:37.303512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.686 qpair failed and we were unable to recover it. 00:26:42.686 [2024-07-15 14:05:37.303722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.686 [2024-07-15 14:05:37.303797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.686 qpair failed and we were unable to recover it. 00:26:42.686 [2024-07-15 14:05:37.303996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.686 [2024-07-15 14:05:37.304076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.686 qpair failed and we were unable to recover it. 00:26:42.686 [2024-07-15 14:05:37.304320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.686 [2024-07-15 14:05:37.304396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.686 qpair failed and we were unable to recover it. 00:26:42.686 [2024-07-15 14:05:37.304749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.686 [2024-07-15 14:05:37.304809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.686 qpair failed and we were unable to recover it. 00:26:42.686 [2024-07-15 14:05:37.305224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.686 [2024-07-15 14:05:37.305292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.686 qpair failed and we were unable to recover it. 
00:26:42.686 [2024-07-15 14:05:37.305544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.686 [2024-07-15 14:05:37.305603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420
00:26:42.686 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (connect() failed, errno = 111; sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 14:05:37.305 through 14:05:37.375 ...]
00:26:42.691 [2024-07-15 14:05:37.375477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.691 [2024-07-15 14:05:37.375564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420
00:26:42.691 qpair failed and we were unable to recover it.
00:26:42.691 [2024-07-15 14:05:37.375830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.691 [2024-07-15 14:05:37.375910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.691 qpair failed and we were unable to recover it. 00:26:42.691 [2024-07-15 14:05:37.376241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.691 [2024-07-15 14:05:37.376317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.691 qpair failed and we were unable to recover it. 00:26:42.691 [2024-07-15 14:05:37.376631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.691 [2024-07-15 14:05:37.376690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.691 qpair failed and we were unable to recover it. 00:26:42.691 [2024-07-15 14:05:37.377016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.691 [2024-07-15 14:05:37.377103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.691 qpair failed and we were unable to recover it. 00:26:42.691 [2024-07-15 14:05:37.377373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.691 [2024-07-15 14:05:37.377450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.691 qpair failed and we were unable to recover it. 00:26:42.691 [2024-07-15 14:05:37.377810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.691 [2024-07-15 14:05:37.377873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.691 qpair failed and we were unable to recover it. 00:26:42.691 [2024-07-15 14:05:37.378196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.691 [2024-07-15 14:05:37.378281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.691 qpair failed and we were unable to recover it. 00:26:42.691 [2024-07-15 14:05:37.378503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.691 [2024-07-15 14:05:37.378580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.691 qpair failed and we were unable to recover it. 00:26:42.691 [2024-07-15 14:05:37.378816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.691 [2024-07-15 14:05:37.378895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.691 qpair failed and we were unable to recover it. 00:26:42.691 [2024-07-15 14:05:37.379123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.691 [2024-07-15 14:05:37.379200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.691 qpair failed and we were unable to recover it. 
00:26:42.691 [2024-07-15 14:05:37.379448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.691 [2024-07-15 14:05:37.379525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.691 qpair failed and we were unable to recover it. 00:26:42.691 [2024-07-15 14:05:37.379772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.691 [2024-07-15 14:05:37.379850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.691 qpair failed and we were unable to recover it. 00:26:42.691 [2024-07-15 14:05:37.380106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.691 [2024-07-15 14:05:37.380182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.691 qpair failed and we were unable to recover it. 00:26:42.691 [2024-07-15 14:05:37.380438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.692 [2024-07-15 14:05:37.380515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.692 qpair failed and we were unable to recover it. 00:26:42.692 [2024-07-15 14:05:37.380860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.692 [2024-07-15 14:05:37.380919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.692 qpair failed and we were unable to recover it. 00:26:42.692 [2024-07-15 14:05:37.381131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.692 [2024-07-15 14:05:37.381209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.692 qpair failed and we were unable to recover it. 00:26:42.692 [2024-07-15 14:05:37.381527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.692 [2024-07-15 14:05:37.381604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.692 qpair failed and we were unable to recover it. 00:26:42.692 [2024-07-15 14:05:37.381916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.692 [2024-07-15 14:05:37.381993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.692 qpair failed and we were unable to recover it. 00:26:42.692 [2024-07-15 14:05:37.382250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.692 [2024-07-15 14:05:37.382327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.692 qpair failed and we were unable to recover it. 00:26:42.692 [2024-07-15 14:05:37.382561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.692 [2024-07-15 14:05:37.382632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.692 qpair failed and we were unable to recover it. 
00:26:42.692 [2024-07-15 14:05:37.382955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.692 [2024-07-15 14:05:37.383033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.692 qpair failed and we were unable to recover it. 00:26:42.692 [2024-07-15 14:05:37.383237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.692 [2024-07-15 14:05:37.383313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.692 qpair failed and we were unable to recover it. 00:26:42.692 [2024-07-15 14:05:37.383519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.692 [2024-07-15 14:05:37.383578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.692 qpair failed and we were unable to recover it. 00:26:42.692 [2024-07-15 14:05:37.383960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.692 [2024-07-15 14:05:37.384054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.692 qpair failed and we were unable to recover it. 00:26:42.692 [2024-07-15 14:05:37.384282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.692 [2024-07-15 14:05:37.384359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.692 qpair failed and we were unable to recover it. 00:26:42.692 [2024-07-15 14:05:37.384543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.692 [2024-07-15 14:05:37.384602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.692 qpair failed and we were unable to recover it. 00:26:42.692 [2024-07-15 14:05:37.384852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.692 [2024-07-15 14:05:37.384913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.692 qpair failed and we were unable to recover it. 00:26:42.692 [2024-07-15 14:05:37.385113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.692 [2024-07-15 14:05:37.385173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.692 qpair failed and we were unable to recover it. 00:26:42.692 [2024-07-15 14:05:37.385369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.692 [2024-07-15 14:05:37.385427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.692 qpair failed and we were unable to recover it. 00:26:42.692 [2024-07-15 14:05:37.385693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.692 [2024-07-15 14:05:37.385775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.692 qpair failed and we were unable to recover it. 
00:26:42.692 [2024-07-15 14:05:37.386053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.692 [2024-07-15 14:05:37.386130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.692 qpair failed and we were unable to recover it. 00:26:42.692 [2024-07-15 14:05:37.386320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.692 [2024-07-15 14:05:37.386397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.692 qpair failed and we were unable to recover it. 00:26:42.692 [2024-07-15 14:05:37.386610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.692 [2024-07-15 14:05:37.386670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.692 qpair failed and we were unable to recover it. 00:26:42.692 [2024-07-15 14:05:37.386902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.692 [2024-07-15 14:05:37.386980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.692 qpair failed and we were unable to recover it. 00:26:42.692 [2024-07-15 14:05:37.387262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.692 [2024-07-15 14:05:37.387321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.692 qpair failed and we were unable to recover it. 00:26:42.692 [2024-07-15 14:05:37.387652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.692 [2024-07-15 14:05:37.387711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.692 qpair failed and we were unable to recover it. 00:26:42.692 [2024-07-15 14:05:37.388067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.692 [2024-07-15 14:05:37.388144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.692 qpair failed and we were unable to recover it. 00:26:42.692 [2024-07-15 14:05:37.388373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.692 [2024-07-15 14:05:37.388449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.692 qpair failed and we were unable to recover it. 00:26:42.692 [2024-07-15 14:05:37.388665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.692 [2024-07-15 14:05:37.388724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.692 qpair failed and we were unable to recover it. 00:26:42.692 [2024-07-15 14:05:37.389005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.692 [2024-07-15 14:05:37.389091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.692 qpair failed and we were unable to recover it. 
00:26:42.692 [2024-07-15 14:05:37.389446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.692 [2024-07-15 14:05:37.389520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.692 qpair failed and we were unable to recover it. 00:26:42.692 [2024-07-15 14:05:37.389894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.692 [2024-07-15 14:05:37.389981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.692 qpair failed and we were unable to recover it. 00:26:42.692 [2024-07-15 14:05:37.390303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.692 [2024-07-15 14:05:37.390378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.692 qpair failed and we were unable to recover it. 00:26:42.692 [2024-07-15 14:05:37.390568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.692 [2024-07-15 14:05:37.390627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.692 qpair failed and we were unable to recover it. 00:26:42.692 [2024-07-15 14:05:37.390815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.692 [2024-07-15 14:05:37.390896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.692 qpair failed and we were unable to recover it. 00:26:42.692 [2024-07-15 14:05:37.391156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.692 [2024-07-15 14:05:37.391232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.692 qpair failed and we were unable to recover it. 00:26:42.692 [2024-07-15 14:05:37.391477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.692 [2024-07-15 14:05:37.391537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.692 qpair failed and we were unable to recover it. 00:26:42.692 [2024-07-15 14:05:37.391820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.693 [2024-07-15 14:05:37.391901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.693 qpair failed and we were unable to recover it. 00:26:42.693 [2024-07-15 14:05:37.392184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.693 [2024-07-15 14:05:37.392261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.693 qpair failed and we were unable to recover it. 00:26:42.693 [2024-07-15 14:05:37.392516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.693 [2024-07-15 14:05:37.392576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.693 qpair failed and we were unable to recover it. 
00:26:42.693 [2024-07-15 14:05:37.392774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.693 [2024-07-15 14:05:37.392835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.693 qpair failed and we were unable to recover it. 00:26:42.693 [2024-07-15 14:05:37.393011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.693 [2024-07-15 14:05:37.393088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.693 qpair failed and we were unable to recover it. 00:26:42.693 [2024-07-15 14:05:37.393330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.693 [2024-07-15 14:05:37.393408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.693 qpair failed and we were unable to recover it. 00:26:42.693 [2024-07-15 14:05:37.393644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.693 [2024-07-15 14:05:37.393703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.693 qpair failed and we were unable to recover it. 00:26:42.693 [2024-07-15 14:05:37.393906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.693 [2024-07-15 14:05:37.393984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.693 qpair failed and we were unable to recover it. 00:26:42.693 [2024-07-15 14:05:37.394216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.693 [2024-07-15 14:05:37.394291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.693 qpair failed and we were unable to recover it. 00:26:42.693 [2024-07-15 14:05:37.394515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.693 [2024-07-15 14:05:37.394575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.693 qpair failed and we were unable to recover it. 00:26:42.693 [2024-07-15 14:05:37.394793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.693 [2024-07-15 14:05:37.394855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.693 qpair failed and we were unable to recover it. 00:26:42.693 [2024-07-15 14:05:37.395162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.693 [2024-07-15 14:05:37.395239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.693 qpair failed and we were unable to recover it. 00:26:42.693 [2024-07-15 14:05:37.395493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.693 [2024-07-15 14:05:37.395552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.693 qpair failed and we were unable to recover it. 
00:26:42.693 [2024-07-15 14:05:37.395793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.693 [2024-07-15 14:05:37.395854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.693 qpair failed and we were unable to recover it. 00:26:42.693 [2024-07-15 14:05:37.396053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.693 [2024-07-15 14:05:37.396129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.693 qpair failed and we were unable to recover it. 00:26:42.693 [2024-07-15 14:05:37.396365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.693 [2024-07-15 14:05:37.396423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.693 qpair failed and we were unable to recover it. 00:26:42.693 [2024-07-15 14:05:37.396664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.693 [2024-07-15 14:05:37.396723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.693 qpair failed and we were unable to recover it. 00:26:42.693 [2024-07-15 14:05:37.396997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.693 [2024-07-15 14:05:37.397073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.693 qpair failed and we were unable to recover it. 00:26:42.693 [2024-07-15 14:05:37.397303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.693 [2024-07-15 14:05:37.397379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.693 qpair failed and we were unable to recover it. 00:26:42.693 [2024-07-15 14:05:37.397603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.693 [2024-07-15 14:05:37.397662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.693 qpair failed and we were unable to recover it. 00:26:42.693 [2024-07-15 14:05:37.397930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.693 [2024-07-15 14:05:37.398008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.693 qpair failed and we were unable to recover it. 00:26:42.693 [2024-07-15 14:05:37.398286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.693 [2024-07-15 14:05:37.398361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.693 qpair failed and we were unable to recover it. 00:26:42.693 [2024-07-15 14:05:37.398752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.693 [2024-07-15 14:05:37.398812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.693 qpair failed and we were unable to recover it. 
00:26:42.693 [2024-07-15 14:05:37.399089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.693 [2024-07-15 14:05:37.399165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.693 qpair failed and we were unable to recover it. 00:26:42.693 [2024-07-15 14:05:37.399477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.693 [2024-07-15 14:05:37.399563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.693 qpair failed and we were unable to recover it. 00:26:42.693 [2024-07-15 14:05:37.399821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.693 [2024-07-15 14:05:37.399901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.693 qpair failed and we were unable to recover it. 00:26:42.693 [2024-07-15 14:05:37.400174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.693 [2024-07-15 14:05:37.400251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.693 qpair failed and we were unable to recover it. 00:26:42.693 [2024-07-15 14:05:37.400570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.693 [2024-07-15 14:05:37.400656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.693 qpair failed and we were unable to recover it. 00:26:42.693 [2024-07-15 14:05:37.400915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.693 [2024-07-15 14:05:37.400993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.693 qpair failed and we were unable to recover it. 00:26:42.693 [2024-07-15 14:05:37.401258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.693 [2024-07-15 14:05:37.401335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.693 qpair failed and we were unable to recover it. 00:26:42.693 [2024-07-15 14:05:37.401623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.693 [2024-07-15 14:05:37.401682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.693 qpair failed and we were unable to recover it. 00:26:42.693 [2024-07-15 14:05:37.402014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.693 [2024-07-15 14:05:37.402091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.693 qpair failed and we were unable to recover it. 00:26:42.693 [2024-07-15 14:05:37.402388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.693 [2024-07-15 14:05:37.402463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.693 qpair failed and we were unable to recover it. 
00:26:42.693 [2024-07-15 14:05:37.402699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.693 [2024-07-15 14:05:37.402769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.693 qpair failed and we were unable to recover it. 00:26:42.693 [2024-07-15 14:05:37.403008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.693 [2024-07-15 14:05:37.403085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.693 qpair failed and we were unable to recover it. 00:26:42.693 [2024-07-15 14:05:37.403282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.693 [2024-07-15 14:05:37.403358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.693 qpair failed and we were unable to recover it. 00:26:42.693 [2024-07-15 14:05:37.403572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.693 [2024-07-15 14:05:37.403632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.693 qpair failed and we were unable to recover it. 00:26:42.694 [2024-07-15 14:05:37.403873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.694 [2024-07-15 14:05:37.403951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.694 qpair failed and we were unable to recover it. 00:26:42.694 [2024-07-15 14:05:37.404141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.694 [2024-07-15 14:05:37.404217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.694 qpair failed and we were unable to recover it. 00:26:42.694 [2024-07-15 14:05:37.404414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.694 [2024-07-15 14:05:37.404474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.694 qpair failed and we were unable to recover it. 00:26:42.694 [2024-07-15 14:05:37.404755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.694 [2024-07-15 14:05:37.404817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.694 qpair failed and we were unable to recover it. 00:26:42.694 [2024-07-15 14:05:37.405074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.694 [2024-07-15 14:05:37.405133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.694 qpair failed and we were unable to recover it. 00:26:42.694 [2024-07-15 14:05:37.405466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.694 [2024-07-15 14:05:37.405525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.694 qpair failed and we were unable to recover it. 
00:26:42.694 [2024-07-15 14:05:37.405785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.694 [2024-07-15 14:05:37.405847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.694 qpair failed and we were unable to recover it. 00:26:42.694 [2024-07-15 14:05:37.406046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.694 [2024-07-15 14:05:37.406123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.694 qpair failed and we were unable to recover it. 00:26:42.694 [2024-07-15 14:05:37.406375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.694 [2024-07-15 14:05:37.406451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.694 qpair failed and we were unable to recover it. 00:26:42.694 [2024-07-15 14:05:37.406631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.694 [2024-07-15 14:05:37.406694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.694 qpair failed and we were unable to recover it. 00:26:42.694 [2024-07-15 14:05:37.406885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.694 [2024-07-15 14:05:37.406964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.694 qpair failed and we were unable to recover it. 00:26:42.694 [2024-07-15 14:05:37.407160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.694 [2024-07-15 14:05:37.407238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.694 qpair failed and we were unable to recover it. 00:26:42.694 [2024-07-15 14:05:37.407552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.694 [2024-07-15 14:05:37.407636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.694 qpair failed and we were unable to recover it. 00:26:42.694 [2024-07-15 14:05:37.407935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.694 [2024-07-15 14:05:37.408012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.694 qpair failed and we were unable to recover it. 00:26:42.694 [2024-07-15 14:05:37.408233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.694 [2024-07-15 14:05:37.408310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.694 qpair failed and we were unable to recover it. 00:26:42.694 [2024-07-15 14:05:37.408551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.694 [2024-07-15 14:05:37.408610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.694 qpair failed and we were unable to recover it. 
00:26:42.694 [2024-07-15 14:05:37.408842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.694 [2024-07-15 14:05:37.408922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.694 qpair failed and we were unable to recover it. 00:26:42.694 [2024-07-15 14:05:37.409129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.694 [2024-07-15 14:05:37.409206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.694 qpair failed and we were unable to recover it. 00:26:42.694 [2024-07-15 14:05:37.409468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.694 [2024-07-15 14:05:37.409526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.694 qpair failed and we were unable to recover it. 00:26:42.694 [2024-07-15 14:05:37.409766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.694 [2024-07-15 14:05:37.409827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.694 qpair failed and we were unable to recover it. 00:26:42.694 [2024-07-15 14:05:37.410149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.694 [2024-07-15 14:05:37.410225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.694 qpair failed and we were unable to recover it. 00:26:42.694 [2024-07-15 14:05:37.410501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.694 [2024-07-15 14:05:37.410587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.694 qpair failed and we were unable to recover it. 00:26:42.694 [2024-07-15 14:05:37.410924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.694 [2024-07-15 14:05:37.411002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.694 qpair failed and we were unable to recover it. 00:26:42.694 [2024-07-15 14:05:37.411301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.694 [2024-07-15 14:05:37.411388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.694 qpair failed and we were unable to recover it. 00:26:42.694 [2024-07-15 14:05:37.411623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.694 [2024-07-15 14:05:37.411682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.694 qpair failed and we were unable to recover it. 00:26:42.694 [2024-07-15 14:05:37.411926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.694 [2024-07-15 14:05:37.412004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.694 qpair failed and we were unable to recover it. 
00:26:42.694 [2024-07-15 14:05:37.412258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.694 [2024-07-15 14:05:37.412333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.694 qpair failed and we were unable to recover it. 00:26:42.694 [2024-07-15 14:05:37.412648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.694 [2024-07-15 14:05:37.412706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.694 qpair failed and we were unable to recover it. 00:26:42.694 [2024-07-15 14:05:37.413013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.694 [2024-07-15 14:05:37.413090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.694 qpair failed and we were unable to recover it. 00:26:42.694 [2024-07-15 14:05:37.413310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.694 [2024-07-15 14:05:37.413394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.694 qpair failed and we were unable to recover it. 00:26:42.694 [2024-07-15 14:05:37.413678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.694 [2024-07-15 14:05:37.413798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.694 qpair failed and we were unable to recover it. 00:26:42.694 [2024-07-15 14:05:37.414013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.694 [2024-07-15 14:05:37.414075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.694 qpair failed and we were unable to recover it. 00:26:42.694 [2024-07-15 14:05:37.414313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.694 [2024-07-15 14:05:37.414391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.694 qpair failed and we were unable to recover it. 00:26:42.694 [2024-07-15 14:05:37.414621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.694 [2024-07-15 14:05:37.414681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.694 qpair failed and we were unable to recover it. 00:26:42.694 [2024-07-15 14:05:37.414915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.694 [2024-07-15 14:05:37.414992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.694 qpair failed and we were unable to recover it. 00:26:42.694 [2024-07-15 14:05:37.415252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.694 [2024-07-15 14:05:37.415329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.694 qpair failed and we were unable to recover it. 
00:26:42.694 [2024-07-15 14:05:37.415509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.694 [2024-07-15 14:05:37.415570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.694 qpair failed and we were unable to recover it. 00:26:42.694 [2024-07-15 14:05:37.415828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.695 [2024-07-15 14:05:37.415888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.695 qpair failed and we were unable to recover it. 00:26:42.695 [2024-07-15 14:05:37.416069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.695 [2024-07-15 14:05:37.416129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.695 qpair failed and we were unable to recover it. 00:26:42.695 [2024-07-15 14:05:37.416387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.695 [2024-07-15 14:05:37.416447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.695 qpair failed and we were unable to recover it. 00:26:42.695 [2024-07-15 14:05:37.416695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.695 [2024-07-15 14:05:37.416765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.695 qpair failed and we were unable to recover it. 00:26:42.695 [2024-07-15 14:05:37.417051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.695 [2024-07-15 14:05:37.417110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.695 qpair failed and we were unable to recover it. 00:26:42.695 [2024-07-15 14:05:37.417304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.695 [2024-07-15 14:05:37.417380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.695 qpair failed and we were unable to recover it. 00:26:42.695 [2024-07-15 14:05:37.417634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.695 [2024-07-15 14:05:37.417693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.695 qpair failed and we were unable to recover it. 00:26:42.695 [2024-07-15 14:05:37.417915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.695 [2024-07-15 14:05:37.417992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.695 qpair failed and we were unable to recover it. 00:26:42.695 [2024-07-15 14:05:37.418270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.695 [2024-07-15 14:05:37.418346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.695 qpair failed and we were unable to recover it. 
00:26:42.695 [2024-07-15 14:05:37.418689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.695 [2024-07-15 14:05:37.418763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.695 qpair failed and we were unable to recover it. 
00:26:42.695-00:26:42.700 [2024-07-15 14:05:37.418999 through 14:05:37.494708] (the same three-line error sequence repeats continuously throughout this interval: posix.c:1038:posix_sock_create "connect() failed, errno = 111", then nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it.")
00:26:42.700 [2024-07-15 14:05:37.495046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.700 [2024-07-15 14:05:37.495136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.700 qpair failed and we were unable to recover it. 00:26:42.700 [2024-07-15 14:05:37.495453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.700 [2024-07-15 14:05:37.495528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.700 qpair failed and we were unable to recover it. 00:26:42.700 [2024-07-15 14:05:37.495842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.700 [2024-07-15 14:05:37.495904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.700 qpair failed and we were unable to recover it. 00:26:42.700 [2024-07-15 14:05:37.496228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.700 [2024-07-15 14:05:37.496305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.700 qpair failed and we were unable to recover it. 00:26:42.700 [2024-07-15 14:05:37.496627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.700 [2024-07-15 14:05:37.496703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.700 qpair failed and we were unable to recover it. 00:26:42.700 [2024-07-15 14:05:37.497048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.700 [2024-07-15 14:05:37.497127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.700 qpair failed and we were unable to recover it. 00:26:42.700 [2024-07-15 14:05:37.497448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.700 [2024-07-15 14:05:37.497524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.700 qpair failed and we were unable to recover it. 00:26:42.700 [2024-07-15 14:05:37.497812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.700 [2024-07-15 14:05:37.497875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.700 qpair failed and we were unable to recover it. 00:26:42.700 [2024-07-15 14:05:37.498152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.700 [2024-07-15 14:05:37.498228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.700 qpair failed and we were unable to recover it. 00:26:42.700 [2024-07-15 14:05:37.498561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.700 [2024-07-15 14:05:37.498636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.700 qpair failed and we were unable to recover it. 
00:26:42.700 [2024-07-15 14:05:37.498945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.700 [2024-07-15 14:05:37.499024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.700 qpair failed and we were unable to recover it. 00:26:42.700 [2024-07-15 14:05:37.499303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.700 [2024-07-15 14:05:37.499379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.700 qpair failed and we were unable to recover it. 00:26:42.700 [2024-07-15 14:05:37.499642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.700 [2024-07-15 14:05:37.499702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.700 qpair failed and we were unable to recover it. 00:26:42.700 [2024-07-15 14:05:37.499965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.700 [2024-07-15 14:05:37.500043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.700 qpair failed and we were unable to recover it. 00:26:42.700 [2024-07-15 14:05:37.500308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.700 [2024-07-15 14:05:37.500385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.700 qpair failed and we were unable to recover it. 00:26:42.700 [2024-07-15 14:05:37.500707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.700 [2024-07-15 14:05:37.500779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.700 qpair failed and we were unable to recover it. 00:26:42.700 [2024-07-15 14:05:37.501080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.700 [2024-07-15 14:05:37.501158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.700 qpair failed and we were unable to recover it. 00:26:42.701 [2024-07-15 14:05:37.501434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.701 [2024-07-15 14:05:37.501511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.701 qpair failed and we were unable to recover it. 00:26:42.701 [2024-07-15 14:05:37.501825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.701 [2024-07-15 14:05:37.501903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.701 qpair failed and we were unable to recover it. 00:26:42.701 [2024-07-15 14:05:37.502166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.701 [2024-07-15 14:05:37.502243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.701 qpair failed and we were unable to recover it. 
00:26:42.701 [2024-07-15 14:05:37.502620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.701 [2024-07-15 14:05:37.502697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.701 qpair failed and we were unable to recover it. 00:26:42.701 [2024-07-15 14:05:37.503011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.701 [2024-07-15 14:05:37.503071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.701 qpair failed and we were unable to recover it. 00:26:42.701 [2024-07-15 14:05:37.503310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.701 [2024-07-15 14:05:37.503387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.701 qpair failed and we were unable to recover it. 00:26:42.701 [2024-07-15 14:05:37.503641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.701 [2024-07-15 14:05:37.503700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.701 qpair failed and we were unable to recover it. 00:26:42.701 [2024-07-15 14:05:37.504035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.701 [2024-07-15 14:05:37.504124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.701 qpair failed and we were unable to recover it. 00:26:42.701 [2024-07-15 14:05:37.504403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.701 [2024-07-15 14:05:37.504478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.701 qpair failed and we were unable to recover it. 00:26:42.701 [2024-07-15 14:05:37.504789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.701 [2024-07-15 14:05:37.504850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.701 qpair failed and we were unable to recover it. 00:26:42.701 [2024-07-15 14:05:37.505142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.701 [2024-07-15 14:05:37.505226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.701 qpair failed and we were unable to recover it. 00:26:42.701 [2024-07-15 14:05:37.505560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.701 [2024-07-15 14:05:37.505638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.701 qpair failed and we were unable to recover it. 00:26:42.701 [2024-07-15 14:05:37.505963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.701 [2024-07-15 14:05:37.506023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.701 qpair failed and we were unable to recover it. 
00:26:42.701 [2024-07-15 14:05:37.506310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.701 [2024-07-15 14:05:37.506390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.701 qpair failed and we were unable to recover it. 00:26:42.701 [2024-07-15 14:05:37.506706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.701 [2024-07-15 14:05:37.506784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.701 qpair failed and we were unable to recover it. 00:26:42.701 [2024-07-15 14:05:37.507087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.701 [2024-07-15 14:05:37.507169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.701 qpair failed and we were unable to recover it. 00:26:42.701 [2024-07-15 14:05:37.507462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.701 [2024-07-15 14:05:37.507539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.701 qpair failed and we were unable to recover it. 00:26:42.701 [2024-07-15 14:05:37.507870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.701 [2024-07-15 14:05:37.507933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.701 qpair failed and we were unable to recover it. 00:26:42.976 [2024-07-15 14:05:37.508212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.976 [2024-07-15 14:05:37.508290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.976 qpair failed and we were unable to recover it. 00:26:42.976 [2024-07-15 14:05:37.508606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.976 [2024-07-15 14:05:37.508692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.976 qpair failed and we were unable to recover it. 00:26:42.976 [2024-07-15 14:05:37.509029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.976 [2024-07-15 14:05:37.509110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.976 qpair failed and we were unable to recover it. 00:26:42.976 [2024-07-15 14:05:37.509399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.976 [2024-07-15 14:05:37.509479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.976 qpair failed and we were unable to recover it. 00:26:42.976 [2024-07-15 14:05:37.509797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.976 [2024-07-15 14:05:37.509862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.976 qpair failed and we were unable to recover it. 
00:26:42.976 [2024-07-15 14:05:37.510139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.976 [2024-07-15 14:05:37.510219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.976 qpair failed and we were unable to recover it. 00:26:42.976 [2024-07-15 14:05:37.510526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.976 [2024-07-15 14:05:37.510607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.977 qpair failed and we were unable to recover it. 00:26:42.977 [2024-07-15 14:05:37.510936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.977 [2024-07-15 14:05:37.511001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.977 qpair failed and we were unable to recover it. 00:26:42.977 [2024-07-15 14:05:37.511354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.977 [2024-07-15 14:05:37.511416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.977 qpair failed and we were unable to recover it. 00:26:42.977 [2024-07-15 14:05:37.511667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.977 [2024-07-15 14:05:37.511728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.977 qpair failed and we were unable to recover it. 00:26:42.977 [2024-07-15 14:05:37.512084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.977 [2024-07-15 14:05:37.512161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.977 qpair failed and we were unable to recover it. 00:26:42.977 [2024-07-15 14:05:37.512474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.977 [2024-07-15 14:05:37.512550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.977 qpair failed and we were unable to recover it. 00:26:42.977 [2024-07-15 14:05:37.512768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.977 [2024-07-15 14:05:37.512830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.977 qpair failed and we were unable to recover it. 00:26:42.977 [2024-07-15 14:05:37.513203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.977 [2024-07-15 14:05:37.513283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.977 qpair failed and we were unable to recover it. 00:26:42.977 [2024-07-15 14:05:37.513562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.977 [2024-07-15 14:05:37.513639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.977 qpair failed and we were unable to recover it. 
00:26:42.977 [2024-07-15 14:05:37.513927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.977 [2024-07-15 14:05:37.513989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.977 qpair failed and we were unable to recover it. 00:26:42.977 [2024-07-15 14:05:37.514261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.977 [2024-07-15 14:05:37.514338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.977 qpair failed and we were unable to recover it. 00:26:42.977 [2024-07-15 14:05:37.514639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.977 [2024-07-15 14:05:37.514699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.977 qpair failed and we were unable to recover it. 00:26:42.977 [2024-07-15 14:05:37.515038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.977 [2024-07-15 14:05:37.515118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.977 qpair failed and we were unable to recover it. 00:26:42.977 [2024-07-15 14:05:37.515438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.977 [2024-07-15 14:05:37.515514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.977 qpair failed and we were unable to recover it. 00:26:42.977 [2024-07-15 14:05:37.515824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.977 [2024-07-15 14:05:37.515888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.977 qpair failed and we were unable to recover it. 00:26:42.977 [2024-07-15 14:05:37.516199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.977 [2024-07-15 14:05:37.516277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.977 qpair failed and we were unable to recover it. 00:26:42.977 [2024-07-15 14:05:37.516596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.977 [2024-07-15 14:05:37.516672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.977 qpair failed and we were unable to recover it. 00:26:42.977 [2024-07-15 14:05:37.516968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.977 [2024-07-15 14:05:37.517030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.977 qpair failed and we were unable to recover it. 00:26:42.977 [2024-07-15 14:05:37.517346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.977 [2024-07-15 14:05:37.517424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.977 qpair failed and we were unable to recover it. 
00:26:42.977 [2024-07-15 14:05:37.517752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.977 [2024-07-15 14:05:37.517813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.977 qpair failed and we were unable to recover it. 00:26:42.977 [2024-07-15 14:05:37.518087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.977 [2024-07-15 14:05:37.518148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.977 qpair failed and we were unable to recover it. 00:26:42.977 [2024-07-15 14:05:37.518417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.977 [2024-07-15 14:05:37.518495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.977 qpair failed and we were unable to recover it. 00:26:42.977 [2024-07-15 14:05:37.518832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.977 [2024-07-15 14:05:37.518895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.977 qpair failed and we were unable to recover it. 00:26:42.977 [2024-07-15 14:05:37.519221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.977 [2024-07-15 14:05:37.519300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.977 qpair failed and we were unable to recover it. 00:26:42.977 [2024-07-15 14:05:37.519624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.977 [2024-07-15 14:05:37.519701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.977 qpair failed and we were unable to recover it. 00:26:42.977 [2024-07-15 14:05:37.520047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.977 [2024-07-15 14:05:37.520109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.977 qpair failed and we were unable to recover it. 00:26:42.977 [2024-07-15 14:05:37.520419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.977 [2024-07-15 14:05:37.520496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.977 qpair failed and we were unable to recover it. 00:26:42.977 [2024-07-15 14:05:37.520806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.977 [2024-07-15 14:05:37.520868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.977 qpair failed and we were unable to recover it. 00:26:42.977 [2024-07-15 14:05:37.521181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.977 [2024-07-15 14:05:37.521259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.977 qpair failed and we were unable to recover it. 
00:26:42.977 [2024-07-15 14:05:37.521507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.977 [2024-07-15 14:05:37.521583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.977 qpair failed and we were unable to recover it. 00:26:42.977 [2024-07-15 14:05:37.521963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.977 [2024-07-15 14:05:37.522025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.977 qpair failed and we were unable to recover it. 00:26:42.977 [2024-07-15 14:05:37.522320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.977 [2024-07-15 14:05:37.522397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.977 qpair failed and we were unable to recover it. 00:26:42.978 [2024-07-15 14:05:37.522671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.978 [2024-07-15 14:05:37.522731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.978 qpair failed and we were unable to recover it. 00:26:42.978 [2024-07-15 14:05:37.523072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.978 [2024-07-15 14:05:37.523134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.978 qpair failed and we were unable to recover it. 00:26:42.978 [2024-07-15 14:05:37.523462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.978 [2024-07-15 14:05:37.523540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.978 qpair failed and we were unable to recover it. 00:26:42.978 [2024-07-15 14:05:37.523862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.978 [2024-07-15 14:05:37.523932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.978 qpair failed and we were unable to recover it. 00:26:42.978 [2024-07-15 14:05:37.524260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.978 [2024-07-15 14:05:37.524336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.978 qpair failed and we were unable to recover it. 00:26:42.978 [2024-07-15 14:05:37.524658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.978 [2024-07-15 14:05:37.524736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.978 qpair failed and we were unable to recover it. 00:26:42.978 [2024-07-15 14:05:37.525072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.978 [2024-07-15 14:05:37.525149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.978 qpair failed and we were unable to recover it. 
00:26:42.978 [2024-07-15 14:05:37.525481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.978 [2024-07-15 14:05:37.525558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.978 qpair failed and we were unable to recover it. 00:26:42.978 [2024-07-15 14:05:37.525839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.978 [2024-07-15 14:05:37.525900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.978 qpair failed and we were unable to recover it. 00:26:42.978 [2024-07-15 14:05:37.526280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.978 [2024-07-15 14:05:37.526357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.978 qpair failed and we were unable to recover it. 00:26:42.978 [2024-07-15 14:05:37.526625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.978 [2024-07-15 14:05:37.526684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.978 qpair failed and we were unable to recover it. 00:26:42.978 [2024-07-15 14:05:37.526964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.978 [2024-07-15 14:05:37.527042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.978 qpair failed and we were unable to recover it. 00:26:42.978 [2024-07-15 14:05:37.527362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.978 [2024-07-15 14:05:37.527439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.978 qpair failed and we were unable to recover it. 00:26:42.978 [2024-07-15 14:05:37.527782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.978 [2024-07-15 14:05:37.527843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.978 qpair failed and we were unable to recover it. 00:26:42.978 [2024-07-15 14:05:37.528181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.978 [2024-07-15 14:05:37.528258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.978 qpair failed and we were unable to recover it. 00:26:42.978 [2024-07-15 14:05:37.528576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.978 [2024-07-15 14:05:37.528654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.978 qpair failed and we were unable to recover it. 00:26:42.978 [2024-07-15 14:05:37.528976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.978 [2024-07-15 14:05:37.529038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.978 qpair failed and we were unable to recover it. 
00:26:42.978 [2024-07-15 14:05:37.529315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.978 [2024-07-15 14:05:37.529393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.978 qpair failed and we were unable to recover it. 00:26:42.978 [2024-07-15 14:05:37.529703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.978 [2024-07-15 14:05:37.529782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.978 qpair failed and we were unable to recover it. 00:26:42.978 [2024-07-15 14:05:37.530054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.978 [2024-07-15 14:05:37.530114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.978 qpair failed and we were unable to recover it. 00:26:42.978 [2024-07-15 14:05:37.530395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.978 [2024-07-15 14:05:37.530472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.978 qpair failed and we were unable to recover it. 00:26:42.978 [2024-07-15 14:05:37.530785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.978 [2024-07-15 14:05:37.530847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.978 qpair failed and we were unable to recover it. 00:26:42.978 [2024-07-15 14:05:37.531083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.978 [2024-07-15 14:05:37.531161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.978 qpair failed and we were unable to recover it. 00:26:42.978 [2024-07-15 14:05:37.531475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.978 [2024-07-15 14:05:37.531556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.978 qpair failed and we were unable to recover it. 00:26:42.978 [2024-07-15 14:05:37.531851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.978 [2024-07-15 14:05:37.531912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.978 qpair failed and we were unable to recover it. 00:26:42.978 [2024-07-15 14:05:37.532225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.978 [2024-07-15 14:05:37.532303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.978 qpair failed and we were unable to recover it. 00:26:42.978 [2024-07-15 14:05:37.532629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.978 [2024-07-15 14:05:37.532710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.978 qpair failed and we were unable to recover it. 
00:26:42.978 [2024-07-15 14:05:37.533013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.978 [2024-07-15 14:05:37.533099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.978 qpair failed and we were unable to recover it. 00:26:42.978 [2024-07-15 14:05:37.533364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.978 [2024-07-15 14:05:37.533443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.978 qpair failed and we were unable to recover it. 00:26:42.978 [2024-07-15 14:05:37.533758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.978 [2024-07-15 14:05:37.533820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.978 qpair failed and we were unable to recover it. 00:26:42.978 [2024-07-15 14:05:37.534148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.978 [2024-07-15 14:05:37.534226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.978 qpair failed and we were unable to recover it. 00:26:42.978 [2024-07-15 14:05:37.534506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.978 [2024-07-15 14:05:37.534582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.978 qpair failed and we were unable to recover it. 00:26:42.978 [2024-07-15 14:05:37.534901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.978 [2024-07-15 14:05:37.534963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.978 qpair failed and we were unable to recover it. 00:26:42.978 [2024-07-15 14:05:37.535293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.978 [2024-07-15 14:05:37.535375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.978 qpair failed and we were unable to recover it. 00:26:42.978 [2024-07-15 14:05:37.535647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.978 [2024-07-15 14:05:37.535706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.978 qpair failed and we were unable to recover it. 00:26:42.978 [2024-07-15 14:05:37.536012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.978 [2024-07-15 14:05:37.536074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.978 qpair failed and we were unable to recover it. 00:26:42.978 [2024-07-15 14:05:37.536360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.978 [2024-07-15 14:05:37.536437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.978 qpair failed and we were unable to recover it. 
00:26:42.978 [2024-07-15 14:05:37.536765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.978 [2024-07-15 14:05:37.536827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.978 qpair failed and we were unable to recover it. 00:26:42.978 [2024-07-15 14:05:37.537152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.978 [2024-07-15 14:05:37.537212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.978 qpair failed and we were unable to recover it. 00:26:42.978 [2024-07-15 14:05:37.537496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.978 [2024-07-15 14:05:37.537576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.978 qpair failed and we were unable to recover it. 00:26:42.978 [2024-07-15 14:05:37.537863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.979 [2024-07-15 14:05:37.537925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.979 qpair failed and we were unable to recover it. 00:26:42.979 [2024-07-15 14:05:37.538243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.979 [2024-07-15 14:05:37.538323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.979 qpair failed and we were unable to recover it. 00:26:42.979 [2024-07-15 14:05:37.538645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.979 [2024-07-15 14:05:37.538721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.979 qpair failed and we were unable to recover it. 00:26:42.979 [2024-07-15 14:05:37.539067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.979 [2024-07-15 14:05:37.539137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.979 qpair failed and we were unable to recover it. 00:26:42.979 [2024-07-15 14:05:37.539425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.979 [2024-07-15 14:05:37.539502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.979 qpair failed and we were unable to recover it. 00:26:42.979 [2024-07-15 14:05:37.539826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.979 [2024-07-15 14:05:37.539889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.979 qpair failed and we were unable to recover it. 00:26:42.979 [2024-07-15 14:05:37.540218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.979 [2024-07-15 14:05:37.540282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.979 qpair failed and we were unable to recover it. 
00:26:42.979 [2024-07-15 14:05:37.540525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.979 [2024-07-15 14:05:37.540602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.979 qpair failed and we were unable to recover it. 00:26:42.979 [2024-07-15 14:05:37.540858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.979 [2024-07-15 14:05:37.540920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.979 qpair failed and we were unable to recover it. 00:26:42.979 [2024-07-15 14:05:37.541242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.979 [2024-07-15 14:05:37.541320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.979 qpair failed and we were unable to recover it. 00:26:42.979 [2024-07-15 14:05:37.541573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.979 [2024-07-15 14:05:37.541638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.979 qpair failed and we were unable to recover it. 00:26:42.979 [2024-07-15 14:05:37.541951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.979 [2024-07-15 14:05:37.542014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.979 qpair failed and we were unable to recover it. 00:26:42.979 [2024-07-15 14:05:37.542339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.979 [2024-07-15 14:05:37.542414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.979 qpair failed and we were unable to recover it. 00:26:42.979 [2024-07-15 14:05:37.542720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.979 [2024-07-15 14:05:37.542808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.979 qpair failed and we were unable to recover it. 00:26:42.979 [2024-07-15 14:05:37.543021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.979 [2024-07-15 14:05:37.543097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.979 qpair failed and we were unable to recover it. 00:26:42.979 [2024-07-15 14:05:37.543404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.979 [2024-07-15 14:05:37.543483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.979 qpair failed and we were unable to recover it. 00:26:42.979 [2024-07-15 14:05:37.543816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.979 [2024-07-15 14:05:37.543879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.979 qpair failed and we were unable to recover it. 
00:26:42.979 [2024-07-15 14:05:37.544189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.979 [2024-07-15 14:05:37.544273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.979 qpair failed and we were unable to recover it. 00:26:42.979 [2024-07-15 14:05:37.544523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.979 [2024-07-15 14:05:37.544582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.979 qpair failed and we were unable to recover it. 00:26:42.979 [2024-07-15 14:05:37.544900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.979 [2024-07-15 14:05:37.544965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.979 qpair failed and we were unable to recover it. 00:26:42.979 [2024-07-15 14:05:37.545211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.979 [2024-07-15 14:05:37.545287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.979 qpair failed and we were unable to recover it. 00:26:42.979 [2024-07-15 14:05:37.545551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.979 [2024-07-15 14:05:37.545627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.979 qpair failed and we were unable to recover it. 00:26:42.979 [2024-07-15 14:05:37.545903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.979 [2024-07-15 14:05:37.545982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.979 qpair failed and we were unable to recover it. 00:26:42.979 [2024-07-15 14:05:37.546227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.979 [2024-07-15 14:05:37.546287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.979 qpair failed and we were unable to recover it. 00:26:42.979 [2024-07-15 14:05:37.546603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.979 [2024-07-15 14:05:37.546680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.979 qpair failed and we were unable to recover it. 00:26:42.979 [2024-07-15 14:05:37.546967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.979 [2024-07-15 14:05:37.547046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.979 qpair failed and we were unable to recover it. 00:26:42.979 [2024-07-15 14:05:37.547305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.979 [2024-07-15 14:05:37.547381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.979 qpair failed and we were unable to recover it. 
00:26:42.979 [2024-07-15 14:05:37.547680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.979 [2024-07-15 14:05:37.547758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.979 qpair failed and we were unable to recover it. 00:26:42.979 [2024-07-15 14:05:37.548089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.979 [2024-07-15 14:05:37.548166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.979 qpair failed and we were unable to recover it. 00:26:42.979 [2024-07-15 14:05:37.548446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.979 [2024-07-15 14:05:37.548524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.979 qpair failed and we were unable to recover it. 00:26:42.979 [2024-07-15 14:05:37.548848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.979 [2024-07-15 14:05:37.548927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.979 qpair failed and we were unable to recover it. 00:26:42.979 [2024-07-15 14:05:37.549242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.979 [2024-07-15 14:05:37.549303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.979 qpair failed and we were unable to recover it. 00:26:42.979 [2024-07-15 14:05:37.549584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.979 [2024-07-15 14:05:37.549661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.979 qpair failed and we were unable to recover it. 00:26:42.979 [2024-07-15 14:05:37.549995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.980 [2024-07-15 14:05:37.550072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.980 qpair failed and we were unable to recover it. 00:26:42.980 [2024-07-15 14:05:37.550359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.980 [2024-07-15 14:05:37.550436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.980 qpair failed and we were unable to recover it. 00:26:42.980 [2024-07-15 14:05:37.550707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.980 [2024-07-15 14:05:37.550782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.980 qpair failed and we were unable to recover it. 00:26:42.980 [2024-07-15 14:05:37.551058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.980 [2024-07-15 14:05:37.551135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.980 qpair failed and we were unable to recover it. 
00:26:42.980 [2024-07-15 14:05:37.551448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.980 [2024-07-15 14:05:37.551526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.980 qpair failed and we were unable to recover it. 00:26:42.980 [2024-07-15 14:05:37.551867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.980 [2024-07-15 14:05:37.551928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.980 qpair failed and we were unable to recover it. 00:26:42.980 [2024-07-15 14:05:37.552213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.980 [2024-07-15 14:05:37.552290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.980 qpair failed and we were unable to recover it. 00:26:42.980 [2024-07-15 14:05:37.552483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.980 [2024-07-15 14:05:37.552571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.980 qpair failed and we were unable to recover it. 00:26:42.980 [2024-07-15 14:05:37.552842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.980 [2024-07-15 14:05:37.552920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.980 qpair failed and we were unable to recover it. 00:26:42.980 [2024-07-15 14:05:37.553249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.980 [2024-07-15 14:05:37.553327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.980 qpair failed and we were unable to recover it. 00:26:42.980 [2024-07-15 14:05:37.553622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.980 [2024-07-15 14:05:37.553691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.980 qpair failed and we were unable to recover it. 00:26:42.980 [2024-07-15 14:05:37.553988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.980 [2024-07-15 14:05:37.554072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.980 qpair failed and we were unable to recover it. 00:26:42.980 [2024-07-15 14:05:37.554390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.980 [2024-07-15 14:05:37.554467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.980 qpair failed and we were unable to recover it. 00:26:42.980 [2024-07-15 14:05:37.554873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.980 [2024-07-15 14:05:37.554935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.980 qpair failed and we were unable to recover it. 
00:26:42.980 [2024-07-15 14:05:37.555220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.980 [2024-07-15 14:05:37.555298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.980 qpair failed and we were unable to recover it. 00:26:42.980 [2024-07-15 14:05:37.555617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.980 [2024-07-15 14:05:37.555694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.980 qpair failed and we were unable to recover it. 00:26:42.980 [2024-07-15 14:05:37.556030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.980 [2024-07-15 14:05:37.556091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.980 qpair failed and we were unable to recover it. 00:26:42.980 [2024-07-15 14:05:37.556371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.980 [2024-07-15 14:05:37.556448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.980 qpair failed and we were unable to recover it. 00:26:42.980 [2024-07-15 14:05:37.556790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.980 [2024-07-15 14:05:37.556852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.980 qpair failed and we were unable to recover it. 00:26:42.980 [2024-07-15 14:05:37.557187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.980 [2024-07-15 14:05:37.557266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.980 qpair failed and we were unable to recover it. 00:26:42.980 [2024-07-15 14:05:37.557584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.980 [2024-07-15 14:05:37.557661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.980 qpair failed and we were unable to recover it. 00:26:42.980 [2024-07-15 14:05:37.557964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.980 [2024-07-15 14:05:37.558026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.980 qpair failed and we were unable to recover it. 00:26:42.980 [2024-07-15 14:05:37.558354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.980 [2024-07-15 14:05:37.558432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.980 qpair failed and we were unable to recover it. 00:26:42.980 [2024-07-15 14:05:37.558702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.980 [2024-07-15 14:05:37.558777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.980 qpair failed and we were unable to recover it. 
00:26:42.980 [2024-07-15 14:05:37.559051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.980 [2024-07-15 14:05:37.559111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.980 qpair failed and we were unable to recover it. 00:26:42.980 [2024-07-15 14:05:37.559400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.980 [2024-07-15 14:05:37.559477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.980 qpair failed and we were unable to recover it. 00:26:42.980 [2024-07-15 14:05:37.559766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.980 [2024-07-15 14:05:37.559827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.980 qpair failed and we were unable to recover it. 00:26:42.980 [2024-07-15 14:05:37.560146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.980 [2024-07-15 14:05:37.560205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.980 qpair failed and we were unable to recover it. 00:26:42.980 [2024-07-15 14:05:37.560497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.980 [2024-07-15 14:05:37.560575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.980 qpair failed and we were unable to recover it. 00:26:42.980 [2024-07-15 14:05:37.560846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.980 [2024-07-15 14:05:37.560906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.980 qpair failed and we were unable to recover it. 00:26:42.980 [2024-07-15 14:05:37.561184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.980 [2024-07-15 14:05:37.561262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.980 qpair failed and we were unable to recover it. 00:26:42.980 [2024-07-15 14:05:37.561574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.980 [2024-07-15 14:05:37.561652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.980 qpair failed and we were unable to recover it. 00:26:42.980 [2024-07-15 14:05:37.561987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.980 [2024-07-15 14:05:37.562048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.980 qpair failed and we were unable to recover it. 00:26:42.980 [2024-07-15 14:05:37.562337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.980 [2024-07-15 14:05:37.562414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.980 qpair failed and we were unable to recover it. 
00:26:42.980 [2024-07-15 14:05:37.562701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.980 [2024-07-15 14:05:37.562773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.980 qpair failed and we were unable to recover it. 00:26:42.980 [2024-07-15 14:05:37.563096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.980 [2024-07-15 14:05:37.563173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.980 qpair failed and we were unable to recover it. 00:26:42.980 [2024-07-15 14:05:37.563455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.980 [2024-07-15 14:05:37.563533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.980 qpair failed and we were unable to recover it. 00:26:42.980 [2024-07-15 14:05:37.563844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.980 [2024-07-15 14:05:37.563905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.980 qpair failed and we were unable to recover it. 00:26:42.980 [2024-07-15 14:05:37.564262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.980 [2024-07-15 14:05:37.564339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.980 qpair failed and we were unable to recover it. 00:26:42.980 [2024-07-15 14:05:37.564654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.981 [2024-07-15 14:05:37.564731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.981 qpair failed and we were unable to recover it. 00:26:42.981 [2024-07-15 14:05:37.565071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.981 [2024-07-15 14:05:37.565149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.981 qpair failed and we were unable to recover it. 00:26:42.981 [2024-07-15 14:05:37.565472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.981 [2024-07-15 14:05:37.565549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.981 qpair failed and we were unable to recover it. 00:26:42.981 [2024-07-15 14:05:37.565868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.981 [2024-07-15 14:05:37.565931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.981 qpair failed and we were unable to recover it. 00:26:42.981 [2024-07-15 14:05:37.566224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.981 [2024-07-15 14:05:37.566302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.981 qpair failed and we were unable to recover it. 
00:26:42.981 [2024-07-15 14:05:37.566564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.981 [2024-07-15 14:05:37.566622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.981 qpair failed and we were unable to recover it. 00:26:42.981 [2024-07-15 14:05:37.566913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.981 [2024-07-15 14:05:37.566991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.981 qpair failed and we were unable to recover it. 00:26:42.981 [2024-07-15 14:05:37.567275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.981 [2024-07-15 14:05:37.567352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.981 qpair failed and we were unable to recover it. 00:26:42.981 [2024-07-15 14:05:37.567663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.981 [2024-07-15 14:05:37.567723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.981 qpair failed and we were unable to recover it. 00:26:42.981 [2024-07-15 14:05:37.568069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.981 [2024-07-15 14:05:37.568159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.981 qpair failed and we were unable to recover it. 00:26:42.981 [2024-07-15 14:05:37.568526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.981 [2024-07-15 14:05:37.568606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.981 qpair failed and we were unable to recover it. 00:26:42.981 [2024-07-15 14:05:37.568893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.981 [2024-07-15 14:05:37.568963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.981 qpair failed and we were unable to recover it. 00:26:42.981 [2024-07-15 14:05:37.569389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.981 [2024-07-15 14:05:37.569466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.981 qpair failed and we were unable to recover it. 00:26:42.981 [2024-07-15 14:05:37.569772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.981 [2024-07-15 14:05:37.569831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.981 qpair failed and we were unable to recover it. 00:26:42.981 [2024-07-15 14:05:37.570160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.981 [2024-07-15 14:05:37.570221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.981 qpair failed and we were unable to recover it. 
00:26:42.981 [2024-07-15 14:05:37.570492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.981 [2024-07-15 14:05:37.570552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.981 qpair failed and we were unable to recover it. 00:26:42.981 [2024-07-15 14:05:37.570858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.981 [2024-07-15 14:05:37.570919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.981 qpair failed and we were unable to recover it. 00:26:42.981 [2024-07-15 14:05:37.571246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.981 [2024-07-15 14:05:37.571323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.981 qpair failed and we were unable to recover it. 00:26:42.981 [2024-07-15 14:05:37.571648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.981 [2024-07-15 14:05:37.571725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.981 qpair failed and we were unable to recover it. 00:26:42.981 [2024-07-15 14:05:37.571995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.981 [2024-07-15 14:05:37.572073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.981 qpair failed and we were unable to recover it. 00:26:42.981 [2024-07-15 14:05:37.572338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.981 [2024-07-15 14:05:37.572415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.981 qpair failed and we were unable to recover it. 00:26:42.981 [2024-07-15 14:05:37.572787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.981 [2024-07-15 14:05:37.572849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.981 qpair failed and we were unable to recover it. 00:26:42.981 [2024-07-15 14:05:37.573173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.981 [2024-07-15 14:05:37.573251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.981 qpair failed and we were unable to recover it. 00:26:42.981 [2024-07-15 14:05:37.573562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.981 [2024-07-15 14:05:37.573641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.981 qpair failed and we were unable to recover it. 00:26:42.981 [2024-07-15 14:05:37.573939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.981 [2024-07-15 14:05:37.574001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.981 qpair failed and we were unable to recover it. 
00:26:42.981 [2024-07-15 14:05:37.574324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.981 [2024-07-15 14:05:37.574403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.981 qpair failed and we were unable to recover it. 00:26:42.982 [2024-07-15 14:05:37.574714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.982 [2024-07-15 14:05:37.574793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.982 qpair failed and we were unable to recover it. 00:26:42.982 [2024-07-15 14:05:37.575068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.982 [2024-07-15 14:05:37.575127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.982 qpair failed and we were unable to recover it. 00:26:42.982 [2024-07-15 14:05:37.575391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.982 [2024-07-15 14:05:37.575469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.982 qpair failed and we were unable to recover it. 00:26:42.982 [2024-07-15 14:05:37.575790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.982 [2024-07-15 14:05:37.575853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.982 qpair failed and we were unable to recover it. 00:26:42.982 [2024-07-15 14:05:37.576099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.982 [2024-07-15 14:05:37.576177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.982 qpair failed and we were unable to recover it. 00:26:42.982 [2024-07-15 14:05:37.576492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.982 [2024-07-15 14:05:37.576573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.982 qpair failed and we were unable to recover it. 00:26:42.982 [2024-07-15 14:05:37.576894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.982 [2024-07-15 14:05:37.576956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.982 qpair failed and we were unable to recover it. 00:26:42.982 [2024-07-15 14:05:37.577279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.982 [2024-07-15 14:05:37.577359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.982 qpair failed and we were unable to recover it. 00:26:42.982 [2024-07-15 14:05:37.577640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.982 [2024-07-15 14:05:37.577717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.982 qpair failed and we were unable to recover it. 
00:26:42.982 [2024-07-15 14:05:37.578012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.982 [2024-07-15 14:05:37.578090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.982 qpair failed and we were unable to recover it. 00:26:42.982 [2024-07-15 14:05:37.578351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.982 [2024-07-15 14:05:37.578428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.982 qpair failed and we were unable to recover it. 00:26:42.982 [2024-07-15 14:05:37.578726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.982 [2024-07-15 14:05:37.578804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.982 qpair failed and we were unable to recover it. 00:26:42.982 [2024-07-15 14:05:37.579133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.982 [2024-07-15 14:05:37.579212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.982 qpair failed and we were unable to recover it. 00:26:42.982 [2024-07-15 14:05:37.579523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.982 [2024-07-15 14:05:37.579600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.982 qpair failed and we were unable to recover it. 00:26:42.982 [2024-07-15 14:05:37.579867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.982 [2024-07-15 14:05:37.579929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.982 qpair failed and we were unable to recover it. 00:26:42.982 [2024-07-15 14:05:37.580214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.982 [2024-07-15 14:05:37.580292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.982 qpair failed and we were unable to recover it. 00:26:42.982 [2024-07-15 14:05:37.580595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.982 [2024-07-15 14:05:37.580672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.982 qpair failed and we were unable to recover it. 00:26:42.982 [2024-07-15 14:05:37.581006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.982 [2024-07-15 14:05:37.581085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.982 qpair failed and we were unable to recover it. 00:26:42.982 [2024-07-15 14:05:37.581422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.982 [2024-07-15 14:05:37.581483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.982 qpair failed and we were unable to recover it. 
00:26:42.982 [2024-07-15 14:05:37.581806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.982 [2024-07-15 14:05:37.581867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.982 qpair failed and we were unable to recover it. 00:26:42.982 [2024-07-15 14:05:37.582129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.982 [2024-07-15 14:05:37.582210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.982 qpair failed and we were unable to recover it. 00:26:42.982 [2024-07-15 14:05:37.582508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.982 [2024-07-15 14:05:37.582584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.982 qpair failed and we were unable to recover it. 00:26:42.982 [2024-07-15 14:05:37.582861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.982 [2024-07-15 14:05:37.582923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.982 qpair failed and we were unable to recover it. 00:26:42.982 [2024-07-15 14:05:37.583175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.982 [2024-07-15 14:05:37.583253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.982 qpair failed and we were unable to recover it. 00:26:42.982 [2024-07-15 14:05:37.583552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.982 [2024-07-15 14:05:37.583611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.982 qpair failed and we were unable to recover it. 00:26:42.982 [2024-07-15 14:05:37.583908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.982 [2024-07-15 14:05:37.583987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.982 qpair failed and we were unable to recover it. 00:26:42.982 [2024-07-15 14:05:37.584323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.982 [2024-07-15 14:05:37.584401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.982 qpair failed and we were unable to recover it. 00:26:42.982 [2024-07-15 14:05:37.584689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.982 [2024-07-15 14:05:37.584763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.982 qpair failed and we were unable to recover it. 00:26:42.982 [2024-07-15 14:05:37.585049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.982 [2024-07-15 14:05:37.585108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.982 qpair failed and we were unable to recover it. 
00:26:42.982 [2024-07-15 14:05:37.585478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.982 [2024-07-15 14:05:37.585557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.982 qpair failed and we were unable to recover it. 00:26:42.982 [2024-07-15 14:05:37.585877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.982 [2024-07-15 14:05:37.585939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.982 qpair failed and we were unable to recover it. 00:26:42.982 [2024-07-15 14:05:37.586262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.982 [2024-07-15 14:05:37.586341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.982 qpair failed and we were unable to recover it. 00:26:42.982 [2024-07-15 14:05:37.586654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.982 [2024-07-15 14:05:37.586714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.982 qpair failed and we were unable to recover it. 00:26:42.982 [2024-07-15 14:05:37.587042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.982 [2024-07-15 14:05:37.587102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.982 qpair failed and we were unable to recover it. 00:26:42.982 [2024-07-15 14:05:37.587412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.982 [2024-07-15 14:05:37.587490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.982 qpair failed and we were unable to recover it. 00:26:42.982 [2024-07-15 14:05:37.587786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.982 [2024-07-15 14:05:37.587850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.982 qpair failed and we were unable to recover it. 00:26:42.982 [2024-07-15 14:05:37.588191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.982 [2024-07-15 14:05:37.588252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.982 qpair failed and we were unable to recover it. 00:26:42.982 [2024-07-15 14:05:37.588567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.982 [2024-07-15 14:05:37.588644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.982 qpair failed and we were unable to recover it. 00:26:42.982 [2024-07-15 14:05:37.588984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.983 [2024-07-15 14:05:37.589046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.983 qpair failed and we were unable to recover it. 
00:26:42.983 [2024-07-15 14:05:37.589345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.983 [2024-07-15 14:05:37.589421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.983 qpair failed and we were unable to recover it. 00:26:42.983 [2024-07-15 14:05:37.589767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.983 [2024-07-15 14:05:37.589829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.983 qpair failed and we were unable to recover it. 00:26:42.983 [2024-07-15 14:05:37.590155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.983 [2024-07-15 14:05:37.590215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.983 qpair failed and we were unable to recover it. 00:26:42.983 [2024-07-15 14:05:37.590473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.983 [2024-07-15 14:05:37.590549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.983 qpair failed and we were unable to recover it. 00:26:42.983 [2024-07-15 14:05:37.590765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.983 [2024-07-15 14:05:37.590825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.983 qpair failed and we were unable to recover it. 00:26:42.983 [2024-07-15 14:05:37.591111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.983 [2024-07-15 14:05:37.591189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.983 qpair failed and we were unable to recover it. 00:26:42.983 [2024-07-15 14:05:37.591441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.983 [2024-07-15 14:05:37.591517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.983 qpair failed and we were unable to recover it. 00:26:42.983 [2024-07-15 14:05:37.592668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.983 [2024-07-15 14:05:37.592715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.983 qpair failed and we were unable to recover it. 00:26:42.983 [2024-07-15 14:05:37.592890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.983 [2024-07-15 14:05:37.592919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.983 qpair failed and we were unable to recover it. 00:26:42.983 [2024-07-15 14:05:37.593115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.983 [2024-07-15 14:05:37.593167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.983 qpair failed and we were unable to recover it. 
00:26:42.983 [2024-07-15 14:05:37.593339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.983 [2024-07-15 14:05:37.593402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.983 qpair failed and we were unable to recover it. 00:26:42.983 [2024-07-15 14:05:37.593611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.983 [2024-07-15 14:05:37.593673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.983 qpair failed and we were unable to recover it. 00:26:42.983 [2024-07-15 14:05:37.593835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.983 [2024-07-15 14:05:37.593864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.983 qpair failed and we were unable to recover it. 00:26:42.983 [2024-07-15 14:05:37.594046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.983 [2024-07-15 14:05:37.594103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.983 qpair failed and we were unable to recover it. 00:26:42.983 [2024-07-15 14:05:37.594302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.983 [2024-07-15 14:05:37.594353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.983 qpair failed and we were unable to recover it. 00:26:42.983 [2024-07-15 14:05:37.594533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.983 [2024-07-15 14:05:37.594586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.983 qpair failed and we were unable to recover it. 00:26:42.983 [2024-07-15 14:05:37.594804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.983 [2024-07-15 14:05:37.594855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.983 qpair failed and we were unable to recover it. 00:26:42.983 [2024-07-15 14:05:37.595058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.983 [2024-07-15 14:05:37.595109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.983 qpair failed and we were unable to recover it. 00:26:42.983 [2024-07-15 14:05:37.595330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.983 [2024-07-15 14:05:37.595395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.983 qpair failed and we were unable to recover it. 00:26:42.983 [2024-07-15 14:05:37.595558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.983 [2024-07-15 14:05:37.595587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.983 qpair failed and we were unable to recover it. 
00:26:42.983 [2024-07-15 14:05:37.595696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.983 [2024-07-15 14:05:37.595735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.983 qpair failed and we were unable to recover it. 00:26:42.983 [2024-07-15 14:05:37.595890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.983 [2024-07-15 14:05:37.595938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.983 qpair failed and we were unable to recover it. 00:26:42.983 [2024-07-15 14:05:37.596098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.983 [2024-07-15 14:05:37.596145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.983 qpair failed and we were unable to recover it. 00:26:42.983 [2024-07-15 14:05:37.596260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.983 [2024-07-15 14:05:37.596310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.983 qpair failed and we were unable to recover it. 00:26:42.983 [2024-07-15 14:05:37.596519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.983 [2024-07-15 14:05:37.596547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.983 qpair failed and we were unable to recover it. 00:26:42.983 [2024-07-15 14:05:37.596752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.983 [2024-07-15 14:05:37.596793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.983 qpair failed and we were unable to recover it. 00:26:42.983 [2024-07-15 14:05:37.596923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.983 [2024-07-15 14:05:37.596970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.983 qpair failed and we were unable to recover it. 00:26:42.983 [2024-07-15 14:05:37.597191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.983 [2024-07-15 14:05:37.597238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.983 qpair failed and we were unable to recover it. 00:26:42.983 [2024-07-15 14:05:37.597484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.983 [2024-07-15 14:05:37.597532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.983 qpair failed and we were unable to recover it. 00:26:42.983 [2024-07-15 14:05:37.597713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.983 [2024-07-15 14:05:37.597751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.983 qpair failed and we were unable to recover it. 
00:26:42.983 [2024-07-15 14:05:37.597894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.983 [2024-07-15 14:05:37.597922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.983 qpair failed and we were unable to recover it. 00:26:42.983 [2024-07-15 14:05:37.598108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.983 [2024-07-15 14:05:37.598153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.983 qpair failed and we were unable to recover it. 00:26:42.984 [2024-07-15 14:05:37.598420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.984 [2024-07-15 14:05:37.598468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.984 qpair failed and we were unable to recover it. 00:26:42.984 [2024-07-15 14:05:37.598714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.984 [2024-07-15 14:05:37.598752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.984 qpair failed and we were unable to recover it. 00:26:42.984 [2024-07-15 14:05:37.598913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.984 [2024-07-15 14:05:37.598941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.984 qpair failed and we were unable to recover it. 00:26:42.984 [2024-07-15 14:05:37.599107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.984 [2024-07-15 14:05:37.599155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.984 qpair failed and we were unable to recover it. 00:26:42.984 [2024-07-15 14:05:37.599384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.984 [2024-07-15 14:05:37.599436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.984 qpair failed and we were unable to recover it. 00:26:42.984 [2024-07-15 14:05:37.599649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.984 [2024-07-15 14:05:37.599677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.984 qpair failed and we were unable to recover it. 00:26:42.984 [2024-07-15 14:05:37.599826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.984 [2024-07-15 14:05:37.599854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.984 qpair failed and we were unable to recover it. 00:26:42.984 [2024-07-15 14:05:37.600010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.984 [2024-07-15 14:05:37.600056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.984 qpair failed and we were unable to recover it. 
00:26:42.984 [2024-07-15 14:05:37.600232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.984 [2024-07-15 14:05:37.600279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.984 qpair failed and we were unable to recover it. 00:26:42.984 [2024-07-15 14:05:37.600431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.984 [2024-07-15 14:05:37.600476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.984 qpair failed and we were unable to recover it. 00:26:42.984 [2024-07-15 14:05:37.600628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.984 [2024-07-15 14:05:37.600655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.984 qpair failed and we were unable to recover it. 00:26:42.984 [2024-07-15 14:05:37.600886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.984 [2024-07-15 14:05:37.600934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.984 qpair failed and we were unable to recover it. 00:26:42.984 [2024-07-15 14:05:37.601180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.984 [2024-07-15 14:05:37.601226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.984 qpair failed and we were unable to recover it. 00:26:42.984 [2024-07-15 14:05:37.601501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.984 [2024-07-15 14:05:37.601548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.984 qpair failed and we were unable to recover it. 00:26:42.984 [2024-07-15 14:05:37.601839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.984 [2024-07-15 14:05:37.601868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.984 qpair failed and we were unable to recover it. 00:26:42.984 [2024-07-15 14:05:37.602006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.984 [2024-07-15 14:05:37.602053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.984 qpair failed and we were unable to recover it. 00:26:42.984 [2024-07-15 14:05:37.602210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.984 [2024-07-15 14:05:37.602256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.984 qpair failed and we were unable to recover it. 00:26:42.984 [2024-07-15 14:05:37.602428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.984 [2024-07-15 14:05:37.602474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.984 qpair failed and we were unable to recover it. 
00:26:42.984 [2024-07-15 14:05:37.602632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.984 [2024-07-15 14:05:37.602660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.984 qpair failed and we were unable to recover it. 00:26:42.984 [2024-07-15 14:05:37.602891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.984 [2024-07-15 14:05:37.602937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.984 qpair failed and we were unable to recover it. 00:26:42.984 [2024-07-15 14:05:37.603202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.984 [2024-07-15 14:05:37.603251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.984 qpair failed and we were unable to recover it. 00:26:42.984 [2024-07-15 14:05:37.603461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.984 [2024-07-15 14:05:37.603512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.984 qpair failed and we were unable to recover it. 00:26:42.984 [2024-07-15 14:05:37.603721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.984 [2024-07-15 14:05:37.603760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.984 qpair failed and we were unable to recover it. 00:26:42.984 [2024-07-15 14:05:37.603913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.984 [2024-07-15 14:05:37.603941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.984 qpair failed and we were unable to recover it. 00:26:42.984 [2024-07-15 14:05:37.604189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.984 [2024-07-15 14:05:37.604235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.984 qpair failed and we were unable to recover it. 00:26:42.984 [2024-07-15 14:05:37.604454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.984 [2024-07-15 14:05:37.604499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.984 qpair failed and we were unable to recover it. 00:26:42.984 [2024-07-15 14:05:37.604732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.984 [2024-07-15 14:05:37.604788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.984 qpair failed and we were unable to recover it. 00:26:42.984 [2024-07-15 14:05:37.604913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.984 [2024-07-15 14:05:37.604942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.984 qpair failed and we were unable to recover it. 
00:26:42.984 [2024-07-15 14:05:37.605081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.984 [2024-07-15 14:05:37.605126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.984 qpair failed and we were unable to recover it. 00:26:42.984 [2024-07-15 14:05:37.605317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.984 [2024-07-15 14:05:37.605362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.984 qpair failed and we were unable to recover it. 00:26:42.984 [2024-07-15 14:05:37.605616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.984 [2024-07-15 14:05:37.605661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.984 qpair failed and we were unable to recover it. 00:26:42.984 [2024-07-15 14:05:37.605882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.984 [2024-07-15 14:05:37.605912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.984 qpair failed and we were unable to recover it. 00:26:42.984 [2024-07-15 14:05:37.606071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.984 [2024-07-15 14:05:37.606101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.984 qpair failed and we were unable to recover it. 00:26:42.984 [2024-07-15 14:05:37.606305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.984 [2024-07-15 14:05:37.606336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.984 qpair failed and we were unable to recover it. 00:26:42.984 [2024-07-15 14:05:37.606593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.984 [2024-07-15 14:05:37.606638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.984 qpair failed and we were unable to recover it. 00:26:42.984 [2024-07-15 14:05:37.606890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.984 [2024-07-15 14:05:37.606936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.984 qpair failed and we were unable to recover it. 00:26:42.984 [2024-07-15 14:05:37.607118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.984 [2024-07-15 14:05:37.607164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.984 qpair failed and we were unable to recover it. 00:26:42.984 [2024-07-15 14:05:37.607379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.985 [2024-07-15 14:05:37.607424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.985 qpair failed and we were unable to recover it. 
00:26:42.985 [2024-07-15 14:05:37.607632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.985 [2024-07-15 14:05:37.607662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.985 qpair failed and we were unable to recover it. 00:26:42.985 [2024-07-15 14:05:37.607833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.985 [2024-07-15 14:05:37.607878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.985 qpair failed and we were unable to recover it. 00:26:42.985 [2024-07-15 14:05:37.608097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.985 [2024-07-15 14:05:37.608142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.985 qpair failed and we were unable to recover it. 00:26:42.985 [2024-07-15 14:05:37.608355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.985 [2024-07-15 14:05:37.608401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.985 qpair failed and we were unable to recover it. 00:26:42.985 [2024-07-15 14:05:37.608601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.985 [2024-07-15 14:05:37.608630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.985 qpair failed and we were unable to recover it. 00:26:42.985 [2024-07-15 14:05:37.608802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.985 [2024-07-15 14:05:37.608847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.985 qpair failed and we were unable to recover it. 00:26:42.985 [2024-07-15 14:05:37.608992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.985 [2024-07-15 14:05:37.609038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.985 qpair failed and we were unable to recover it. 00:26:42.985 [2024-07-15 14:05:37.609228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.985 [2024-07-15 14:05:37.609273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.985 qpair failed and we were unable to recover it. 00:26:42.985 [2024-07-15 14:05:37.609470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.985 [2024-07-15 14:05:37.609515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.985 qpair failed and we were unable to recover it. 00:26:42.985 [2024-07-15 14:05:37.609712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.985 [2024-07-15 14:05:37.609750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.985 qpair failed and we were unable to recover it. 
00:26:42.985 [2024-07-15 14:05:37.609945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.985 [2024-07-15 14:05:37.609975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.985 qpair failed and we were unable to recover it. 00:26:42.985 [2024-07-15 14:05:37.610193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.985 [2024-07-15 14:05:37.610238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.985 qpair failed and we were unable to recover it. 00:26:42.985 [2024-07-15 14:05:37.610444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.985 [2024-07-15 14:05:37.610489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.985 qpair failed and we were unable to recover it. 00:26:42.985 [2024-07-15 14:05:37.610708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.985 [2024-07-15 14:05:37.610746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.985 qpair failed and we were unable to recover it. 00:26:42.985 [2024-07-15 14:05:37.610970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.985 [2024-07-15 14:05:37.611015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.985 qpair failed and we were unable to recover it. 00:26:42.985 [2024-07-15 14:05:37.611236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.985 [2024-07-15 14:05:37.611282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.985 qpair failed and we were unable to recover it. 00:26:42.985 [2024-07-15 14:05:37.611483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.985 [2024-07-15 14:05:37.611526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.985 qpair failed and we were unable to recover it. 00:26:42.985 [2024-07-15 14:05:37.611658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.985 [2024-07-15 14:05:37.611696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.985 qpair failed and we were unable to recover it. 00:26:42.985 [2024-07-15 14:05:37.611878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.985 [2024-07-15 14:05:37.611907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.985 qpair failed and we were unable to recover it. 00:26:42.985 [2024-07-15 14:05:37.612073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.985 [2024-07-15 14:05:37.612101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.985 qpair failed and we were unable to recover it. 
00:26:42.985 [2024-07-15 14:05:37.612342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.985 [2024-07-15 14:05:37.612371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.985 qpair failed and we were unable to recover it. 00:26:42.985 [2024-07-15 14:05:37.612606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.985 [2024-07-15 14:05:37.612635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.985 qpair failed and we were unable to recover it. 00:26:42.985 [2024-07-15 14:05:37.612857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.985 [2024-07-15 14:05:37.612886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.985 qpair failed and we were unable to recover it. 00:26:42.985 [2024-07-15 14:05:37.613064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.985 [2024-07-15 14:05:37.613098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.985 qpair failed and we were unable to recover it. 00:26:42.985 [2024-07-15 14:05:37.613302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.985 [2024-07-15 14:05:37.613330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.985 qpair failed and we were unable to recover it. 00:26:42.985 [2024-07-15 14:05:37.613508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.985 [2024-07-15 14:05:37.613551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.985 qpair failed and we were unable to recover it. 00:26:42.985 [2024-07-15 14:05:37.613747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.985 [2024-07-15 14:05:37.613776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.985 qpair failed and we were unable to recover it. 00:26:42.985 [2024-07-15 14:05:37.613949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.985 [2024-07-15 14:05:37.613976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.985 qpair failed and we were unable to recover it. 00:26:42.985 [2024-07-15 14:05:37.614154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.985 [2024-07-15 14:05:37.614182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.985 qpair failed and we were unable to recover it. 00:26:42.985 [2024-07-15 14:05:37.614331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.985 [2024-07-15 14:05:37.614359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.985 qpair failed and we were unable to recover it. 
00:26:42.985 [2024-07-15 14:05:37.614500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.985 [2024-07-15 14:05:37.614528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.985 qpair failed and we were unable to recover it. 00:26:42.985 [2024-07-15 14:05:37.614705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.985 [2024-07-15 14:05:37.614732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.985 qpair failed and we were unable to recover it. 00:26:42.985 [2024-07-15 14:05:37.614892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.985 [2024-07-15 14:05:37.614920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.985 qpair failed and we were unable to recover it. 00:26:42.985 [2024-07-15 14:05:37.615027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.985 [2024-07-15 14:05:37.615055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.985 qpair failed and we were unable to recover it. 00:26:42.985 [2024-07-15 14:05:37.615226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.985 [2024-07-15 14:05:37.615254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.985 qpair failed and we were unable to recover it. 00:26:42.985 [2024-07-15 14:05:37.615424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.985 [2024-07-15 14:05:37.615451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.985 qpair failed and we were unable to recover it. 00:26:42.985 [2024-07-15 14:05:37.615628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.985 [2024-07-15 14:05:37.615656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.985 qpair failed and we were unable to recover it. 00:26:42.985 [2024-07-15 14:05:37.615792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.985 [2024-07-15 14:05:37.615820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.985 qpair failed and we were unable to recover it. 00:26:42.985 [2024-07-15 14:05:37.615977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.985 [2024-07-15 14:05:37.616004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.985 qpair failed and we were unable to recover it. 00:26:42.985 [2024-07-15 14:05:37.616147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.986 [2024-07-15 14:05:37.616173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.986 qpair failed and we were unable to recover it. 
00:26:42.986 [2024-07-15 14:05:37.616327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.986 [2024-07-15 14:05:37.616353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.986 qpair failed and we were unable to recover it. 00:26:42.986 [2024-07-15 14:05:37.616503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.986 [2024-07-15 14:05:37.616529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.986 qpair failed and we were unable to recover it. 00:26:42.986 [2024-07-15 14:05:37.616671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.986 [2024-07-15 14:05:37.616698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.986 qpair failed and we were unable to recover it. 00:26:42.986 [2024-07-15 14:05:37.616822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.986 [2024-07-15 14:05:37.616850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.986 qpair failed and we were unable to recover it. 00:26:42.986 [2024-07-15 14:05:37.616973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.986 [2024-07-15 14:05:37.616999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.986 qpair failed and we were unable to recover it. 00:26:42.986 [2024-07-15 14:05:37.617153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.986 [2024-07-15 14:05:37.617180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.986 qpair failed and we were unable to recover it. 00:26:42.986 [2024-07-15 14:05:37.617424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.986 [2024-07-15 14:05:37.617452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.986 qpair failed and we were unable to recover it. 00:26:42.986 [2024-07-15 14:05:37.617568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.986 [2024-07-15 14:05:37.617595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.986 qpair failed and we were unable to recover it. 00:26:42.986 [2024-07-15 14:05:37.617747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.986 [2024-07-15 14:05:37.617775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.986 qpair failed and we were unable to recover it. 00:26:42.986 [2024-07-15 14:05:37.617929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.986 [2024-07-15 14:05:37.617956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.986 qpair failed and we were unable to recover it. 
00:26:42.986 [2024-07-15 14:05:37.618092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.986 [2024-07-15 14:05:37.618118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.986 qpair failed and we were unable to recover it. 00:26:42.986 [2024-07-15 14:05:37.618270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.986 [2024-07-15 14:05:37.618296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.986 qpair failed and we were unable to recover it. 00:26:42.986 [2024-07-15 14:05:37.618432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.986 [2024-07-15 14:05:37.618459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.986 qpair failed and we were unable to recover it. 00:26:42.986 [2024-07-15 14:05:37.618630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.986 [2024-07-15 14:05:37.618657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.986 qpair failed and we were unable to recover it. 00:26:42.986 [2024-07-15 14:05:37.618766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.986 [2024-07-15 14:05:37.618808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.986 qpair failed and we were unable to recover it. 00:26:42.986 [2024-07-15 14:05:37.618940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.986 [2024-07-15 14:05:37.618966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.986 qpair failed and we were unable to recover it. 00:26:42.986 [2024-07-15 14:05:37.619123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.986 [2024-07-15 14:05:37.619149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.986 qpair failed and we were unable to recover it. 00:26:42.986 [2024-07-15 14:05:37.619320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.986 [2024-07-15 14:05:37.619346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.986 qpair failed and we were unable to recover it. 00:26:42.986 [2024-07-15 14:05:37.619499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.986 [2024-07-15 14:05:37.619525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.986 qpair failed and we were unable to recover it. 00:26:42.986 [2024-07-15 14:05:37.619696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.986 [2024-07-15 14:05:37.619722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.986 qpair failed and we were unable to recover it. 
00:26:42.986 [2024-07-15 14:05:37.619905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.986 [2024-07-15 14:05:37.619931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.986 qpair failed and we were unable to recover it. 00:26:42.986 [2024-07-15 14:05:37.620068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.986 [2024-07-15 14:05:37.620093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.986 qpair failed and we were unable to recover it. 00:26:42.986 [2024-07-15 14:05:37.620236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.986 [2024-07-15 14:05:37.620262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.986 qpair failed and we were unable to recover it. 00:26:42.986 [2024-07-15 14:05:37.620388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.986 [2024-07-15 14:05:37.620418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.986 qpair failed and we were unable to recover it. 00:26:42.986 [2024-07-15 14:05:37.620589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.986 [2024-07-15 14:05:37.620615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.986 qpair failed and we were unable to recover it. 00:26:42.986 [2024-07-15 14:05:37.620786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.986 [2024-07-15 14:05:37.620812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.986 qpair failed and we were unable to recover it. 00:26:42.986 [2024-07-15 14:05:37.620930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.986 [2024-07-15 14:05:37.620956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.986 qpair failed and we were unable to recover it. 00:26:42.986 [2024-07-15 14:05:37.621094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.986 [2024-07-15 14:05:37.621120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.986 qpair failed and we were unable to recover it. 00:26:42.986 [2024-07-15 14:05:37.621352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.986 [2024-07-15 14:05:37.621377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.986 qpair failed and we were unable to recover it. 00:26:42.986 [2024-07-15 14:05:37.621599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.986 [2024-07-15 14:05:37.621625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.986 qpair failed and we were unable to recover it. 
00:26:42.986 [2024-07-15 14:05:37.621820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.986 [2024-07-15 14:05:37.621846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.986 qpair failed and we were unable to recover it. 00:26:42.986 [2024-07-15 14:05:37.621965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.986 [2024-07-15 14:05:37.621990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.986 qpair failed and we were unable to recover it. 00:26:42.986 [2024-07-15 14:05:37.622135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.986 [2024-07-15 14:05:37.622160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.986 qpair failed and we were unable to recover it. 00:26:42.986 [2024-07-15 14:05:37.622332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.986 [2024-07-15 14:05:37.622358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.987 qpair failed and we were unable to recover it. 00:26:42.987 [2024-07-15 14:05:37.622473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.987 [2024-07-15 14:05:37.622498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.987 qpair failed and we were unable to recover it. 00:26:42.987 [2024-07-15 14:05:37.622639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.987 [2024-07-15 14:05:37.622665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.987 qpair failed and we were unable to recover it. 00:26:42.987 [2024-07-15 14:05:37.622824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.987 [2024-07-15 14:05:37.622851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.987 qpair failed and we were unable to recover it. 00:26:42.987 [2024-07-15 14:05:37.622959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.987 [2024-07-15 14:05:37.622985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.987 qpair failed and we were unable to recover it. 00:26:42.987 [2024-07-15 14:05:37.623132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.987 [2024-07-15 14:05:37.623158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.987 qpair failed and we were unable to recover it. 00:26:42.987 [2024-07-15 14:05:37.623294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.987 [2024-07-15 14:05:37.623334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.987 qpair failed and we were unable to recover it. 
00:26:42.987 [2024-07-15 14:05:37.623470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.987 [2024-07-15 14:05:37.623495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.987 qpair failed and we were unable to recover it. 00:26:42.987 [2024-07-15 14:05:37.623669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.987 [2024-07-15 14:05:37.623695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.987 qpair failed and we were unable to recover it. 00:26:42.987 [2024-07-15 14:05:37.623848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.987 [2024-07-15 14:05:37.623875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.987 qpair failed and we were unable to recover it. 00:26:42.987 [2024-07-15 14:05:37.623992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.987 [2024-07-15 14:05:37.624018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.987 qpair failed and we were unable to recover it. 00:26:42.987 [2024-07-15 14:05:37.624203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.987 [2024-07-15 14:05:37.624228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.987 qpair failed and we were unable to recover it. 00:26:42.987 [2024-07-15 14:05:37.624367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.987 [2024-07-15 14:05:37.624393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.987 qpair failed and we were unable to recover it. 00:26:42.987 [2024-07-15 14:05:37.624537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.987 [2024-07-15 14:05:37.624563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.987 qpair failed and we were unable to recover it. 00:26:42.987 [2024-07-15 14:05:37.624702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.987 [2024-07-15 14:05:37.624728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.987 qpair failed and we were unable to recover it. 00:26:42.987 [2024-07-15 14:05:37.624845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.987 [2024-07-15 14:05:37.624871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.987 qpair failed and we were unable to recover it. 00:26:42.987 [2024-07-15 14:05:37.624980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.987 [2024-07-15 14:05:37.625005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.987 qpair failed and we were unable to recover it. 
00:26:42.987 [2024-07-15 14:05:37.625225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.987 [2024-07-15 14:05:37.625251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.987 qpair failed and we were unable to recover it. 00:26:42.987 [2024-07-15 14:05:37.625469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.987 [2024-07-15 14:05:37.625495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.987 qpair failed and we were unable to recover it. 00:26:42.987 [2024-07-15 14:05:37.625763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.987 [2024-07-15 14:05:37.625791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.987 qpair failed and we were unable to recover it. 00:26:42.987 [2024-07-15 14:05:37.625959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.987 [2024-07-15 14:05:37.625985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.987 qpair failed and we were unable to recover it. 00:26:42.987 [2024-07-15 14:05:37.626144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.987 [2024-07-15 14:05:37.626169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.987 qpair failed and we were unable to recover it. 00:26:42.987 [2024-07-15 14:05:37.626328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.987 [2024-07-15 14:05:37.626354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.987 qpair failed and we were unable to recover it. 00:26:42.987 [2024-07-15 14:05:37.626499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.987 [2024-07-15 14:05:37.626525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.987 qpair failed and we were unable to recover it. 00:26:42.987 [2024-07-15 14:05:37.626636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.987 [2024-07-15 14:05:37.626662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.987 qpair failed and we were unable to recover it. 00:26:42.987 [2024-07-15 14:05:37.626805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.987 [2024-07-15 14:05:37.626832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.987 qpair failed and we were unable to recover it. 00:26:42.987 [2024-07-15 14:05:37.626947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.987 [2024-07-15 14:05:37.626973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.987 qpair failed and we were unable to recover it. 
00:26:42.987 [2024-07-15 14:05:37.627122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.987 [2024-07-15 14:05:37.627148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.987 qpair failed and we were unable to recover it. 00:26:42.987 [2024-07-15 14:05:37.627302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.987 [2024-07-15 14:05:37.627328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.987 qpair failed and we were unable to recover it. 00:26:42.987 [2024-07-15 14:05:37.627457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.987 [2024-07-15 14:05:37.627483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.987 qpair failed and we were unable to recover it. 00:26:42.987 [2024-07-15 14:05:37.627711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.987 [2024-07-15 14:05:37.627769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.987 qpair failed and we were unable to recover it. 00:26:42.987 [2024-07-15 14:05:37.627921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.987 [2024-07-15 14:05:37.627947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.987 qpair failed and we were unable to recover it. 00:26:42.987 [2024-07-15 14:05:37.628091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.987 [2024-07-15 14:05:37.628117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.987 qpair failed and we were unable to recover it. 00:26:42.987 [2024-07-15 14:05:37.628259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.987 [2024-07-15 14:05:37.628285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.987 qpair failed and we were unable to recover it. 00:26:42.987 [2024-07-15 14:05:37.628475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.987 [2024-07-15 14:05:37.628499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.987 qpair failed and we were unable to recover it. 00:26:42.987 [2024-07-15 14:05:37.628659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.987 [2024-07-15 14:05:37.628684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.987 qpair failed and we were unable to recover it. 00:26:42.987 [2024-07-15 14:05:37.628828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.987 [2024-07-15 14:05:37.628855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.987 qpair failed and we were unable to recover it. 
00:26:42.987 [2024-07-15 14:05:37.628977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.987 [2024-07-15 14:05:37.629003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.987 qpair failed and we were unable to recover it. 00:26:42.987 [2024-07-15 14:05:37.629172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.987 [2024-07-15 14:05:37.629198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.987 qpair failed and we were unable to recover it. 00:26:42.987 [2024-07-15 14:05:37.629365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.987 [2024-07-15 14:05:37.629392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.987 qpair failed and we were unable to recover it. 00:26:42.987 [2024-07-15 14:05:37.629536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.987 [2024-07-15 14:05:37.629568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.987 qpair failed and we were unable to recover it. 00:26:42.988 [2024-07-15 14:05:37.629734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.988 [2024-07-15 14:05:37.629767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.988 qpair failed and we were unable to recover it. 00:26:42.988 [2024-07-15 14:05:37.629886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.988 [2024-07-15 14:05:37.629912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.988 qpair failed and we were unable to recover it. 00:26:42.988 [2024-07-15 14:05:37.630069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.988 [2024-07-15 14:05:37.630108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.988 qpair failed and we were unable to recover it. 00:26:42.988 [2024-07-15 14:05:37.630223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.988 [2024-07-15 14:05:37.630249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.988 qpair failed and we were unable to recover it. 00:26:42.988 [2024-07-15 14:05:37.630422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.988 [2024-07-15 14:05:37.630447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.988 qpair failed and we were unable to recover it. 00:26:42.988 [2024-07-15 14:05:37.630596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.988 [2024-07-15 14:05:37.630622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.988 qpair failed and we were unable to recover it. 
00:26:42.988 [2024-07-15 14:05:37.630761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.988 [2024-07-15 14:05:37.630788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.988 qpair failed and we were unable to recover it. 00:26:42.988 [2024-07-15 14:05:37.630905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.988 [2024-07-15 14:05:37.630931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.988 qpair failed and we were unable to recover it. 00:26:42.988 [2024-07-15 14:05:37.631083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.988 [2024-07-15 14:05:37.631108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.988 qpair failed and we were unable to recover it. 00:26:42.988 [2024-07-15 14:05:37.631279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.988 [2024-07-15 14:05:37.631305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.988 qpair failed and we were unable to recover it. 00:26:42.988 [2024-07-15 14:05:37.631503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.988 [2024-07-15 14:05:37.631528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.988 qpair failed and we were unable to recover it. 00:26:42.988 [2024-07-15 14:05:37.631709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.988 [2024-07-15 14:05:37.631734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.988 qpair failed and we were unable to recover it. 00:26:42.988 [2024-07-15 14:05:37.631904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.988 [2024-07-15 14:05:37.631930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.988 qpair failed and we were unable to recover it. 00:26:42.988 [2024-07-15 14:05:37.632102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.988 [2024-07-15 14:05:37.632128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.988 qpair failed and we were unable to recover it. 00:26:42.988 [2024-07-15 14:05:37.632230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.988 [2024-07-15 14:05:37.632256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.988 qpair failed and we were unable to recover it. 00:26:42.988 [2024-07-15 14:05:37.632435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.988 [2024-07-15 14:05:37.632475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.988 qpair failed and we were unable to recover it. 
00:26:42.988 [2024-07-15 14:05:37.632664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.988 [2024-07-15 14:05:37.632689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.988 qpair failed and we were unable to recover it. 00:26:42.988 [2024-07-15 14:05:37.632817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.988 [2024-07-15 14:05:37.632844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.988 qpair failed and we were unable to recover it. 00:26:42.988 [2024-07-15 14:05:37.633018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.988 [2024-07-15 14:05:37.633057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.988 qpair failed and we were unable to recover it. 00:26:42.988 [2024-07-15 14:05:37.633241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.988 [2024-07-15 14:05:37.633267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.988 qpair failed and we were unable to recover it. 00:26:42.988 [2024-07-15 14:05:37.633380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.988 [2024-07-15 14:05:37.633406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.988 qpair failed and we were unable to recover it. 00:26:42.988 [2024-07-15 14:05:37.633559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.988 [2024-07-15 14:05:37.633585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.988 qpair failed and we were unable to recover it. 00:26:42.988 [2024-07-15 14:05:37.633695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.988 [2024-07-15 14:05:37.633721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.988 qpair failed and we were unable to recover it. 00:26:42.988 [2024-07-15 14:05:37.633899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.988 [2024-07-15 14:05:37.633926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.988 qpair failed and we were unable to recover it. 00:26:42.988 [2024-07-15 14:05:37.634048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.988 [2024-07-15 14:05:37.634072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.988 qpair failed and we were unable to recover it. 00:26:42.988 [2024-07-15 14:05:37.634232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.988 [2024-07-15 14:05:37.634258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.988 qpair failed and we were unable to recover it. 
00:26:42.988 [2024-07-15 14:05:37.634407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.988 [2024-07-15 14:05:37.634433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.988 qpair failed and we were unable to recover it. 00:26:42.988 [2024-07-15 14:05:37.634608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.988 [2024-07-15 14:05:37.634634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.988 qpair failed and we were unable to recover it. 00:26:42.988 [2024-07-15 14:05:37.634775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.988 [2024-07-15 14:05:37.634801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.988 qpair failed and we were unable to recover it. 00:26:42.988 [2024-07-15 14:05:37.634929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.988 [2024-07-15 14:05:37.634959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.988 qpair failed and we were unable to recover it. 00:26:42.988 [2024-07-15 14:05:37.635081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.988 [2024-07-15 14:05:37.635107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.988 qpair failed and we were unable to recover it. 00:26:42.988 [2024-07-15 14:05:37.635266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.988 [2024-07-15 14:05:37.635292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.988 qpair failed and we were unable to recover it. 00:26:42.988 [2024-07-15 14:05:37.635458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.988 [2024-07-15 14:05:37.635484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.988 qpair failed and we were unable to recover it. 00:26:42.988 [2024-07-15 14:05:37.635652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.988 [2024-07-15 14:05:37.635677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.988 qpair failed and we were unable to recover it. 00:26:42.988 [2024-07-15 14:05:37.635809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.988 [2024-07-15 14:05:37.635836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.988 qpair failed and we were unable to recover it. 00:26:42.988 [2024-07-15 14:05:37.635961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.988 [2024-07-15 14:05:37.635986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:42.988 qpair failed and we were unable to recover it. 
00:26:42.988 [2024-07-15 14:05:37.636158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.988 [2024-07-15 14:05:37.636182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420
00:26:42.988 qpair failed and we were unable to recover it.
00:26:42.988 [... the same connect() failed (errno = 111, ECONNREFUSED) / sock connection error pair for tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it.", repeats for every retry through 14:05:37.647; only the timestamps differ ...]
00:26:42.990 [2024-07-15 14:05:37.648004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:42.990 [2024-07-15 14:05:37.648048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420
00:26:42.990 qpair failed and we were unable to recover it.
00:26:42.990 [... the same connect() failed (errno = 111) / sock connection error pair for tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it.", repeats for every retry through 14:05:37.679; only the timestamps differ ...]
00:26:42.994 [2024-07-15 14:05:37.679530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.994 [2024-07-15 14:05:37.679563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.994 qpair failed and we were unable to recover it. 00:26:42.994 [2024-07-15 14:05:37.679729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.994 [2024-07-15 14:05:37.679770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.994 qpair failed and we were unable to recover it. 00:26:42.994 [2024-07-15 14:05:37.679898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.994 [2024-07-15 14:05:37.679923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.994 qpair failed and we were unable to recover it. 00:26:42.994 [2024-07-15 14:05:37.680108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.994 [2024-07-15 14:05:37.680141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.994 qpair failed and we were unable to recover it. 00:26:42.994 [2024-07-15 14:05:37.680301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.994 [2024-07-15 14:05:37.680334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.994 qpair failed and we were unable to recover it. 00:26:42.994 [2024-07-15 14:05:37.680482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.994 [2024-07-15 14:05:37.680514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.994 qpair failed and we were unable to recover it. 00:26:42.995 [2024-07-15 14:05:37.680668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.995 [2024-07-15 14:05:37.680701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.995 qpair failed and we were unable to recover it. 00:26:42.995 [2024-07-15 14:05:37.680882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.995 [2024-07-15 14:05:37.680907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.995 qpair failed and we were unable to recover it. 00:26:42.995 [2024-07-15 14:05:37.681052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.995 [2024-07-15 14:05:37.681090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.995 qpair failed and we were unable to recover it. 00:26:42.995 [2024-07-15 14:05:37.681252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.995 [2024-07-15 14:05:37.681285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.995 qpair failed and we were unable to recover it. 
00:26:42.995 [2024-07-15 14:05:37.681422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.995 [2024-07-15 14:05:37.681454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.995 qpair failed and we were unable to recover it. 00:26:42.995 [2024-07-15 14:05:37.681614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.995 [2024-07-15 14:05:37.681647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.995 qpair failed and we were unable to recover it. 00:26:42.995 [2024-07-15 14:05:37.681805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.995 [2024-07-15 14:05:37.681831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.995 qpair failed and we were unable to recover it. 00:26:42.995 [2024-07-15 14:05:37.681948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.995 [2024-07-15 14:05:37.681973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.995 qpair failed and we were unable to recover it. 00:26:42.995 [2024-07-15 14:05:37.682141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.995 [2024-07-15 14:05:37.682174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.995 qpair failed and we were unable to recover it. 00:26:42.995 [2024-07-15 14:05:37.682326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.995 [2024-07-15 14:05:37.682357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.995 qpair failed and we were unable to recover it. 00:26:42.995 [2024-07-15 14:05:37.682481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.995 [2024-07-15 14:05:37.682513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.995 qpair failed and we were unable to recover it. 00:26:42.995 [2024-07-15 14:05:37.682698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.995 [2024-07-15 14:05:37.682730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.995 qpair failed and we were unable to recover it. 00:26:42.995 [2024-07-15 14:05:37.682877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.995 [2024-07-15 14:05:37.682902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.995 qpair failed and we were unable to recover it. 00:26:42.995 [2024-07-15 14:05:37.683000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.995 [2024-07-15 14:05:37.683045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.995 qpair failed and we were unable to recover it. 
00:26:42.995 [2024-07-15 14:05:37.683196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.995 [2024-07-15 14:05:37.683227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.995 qpair failed and we were unable to recover it. 00:26:42.995 [2024-07-15 14:05:37.683396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.995 [2024-07-15 14:05:37.683427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.995 qpair failed and we were unable to recover it. 00:26:42.995 [2024-07-15 14:05:37.683600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.995 [2024-07-15 14:05:37.683632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.995 qpair failed and we were unable to recover it. 00:26:42.995 [2024-07-15 14:05:37.683820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.995 [2024-07-15 14:05:37.683846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.995 qpair failed and we were unable to recover it. 00:26:42.995 [2024-07-15 14:05:37.683956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.995 [2024-07-15 14:05:37.683981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.995 qpair failed and we were unable to recover it. 00:26:42.995 [2024-07-15 14:05:37.684153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.995 [2024-07-15 14:05:37.684183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.995 qpair failed and we were unable to recover it. 00:26:42.995 [2024-07-15 14:05:37.684369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.995 [2024-07-15 14:05:37.684404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.995 qpair failed and we were unable to recover it. 00:26:42.995 [2024-07-15 14:05:37.684553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.995 [2024-07-15 14:05:37.684583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.995 qpair failed and we were unable to recover it. 00:26:42.995 [2024-07-15 14:05:37.684775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.995 [2024-07-15 14:05:37.684819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.995 qpair failed and we were unable to recover it. 00:26:42.995 [2024-07-15 14:05:37.684926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.995 [2024-07-15 14:05:37.684951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.995 qpair failed and we were unable to recover it. 
00:26:42.995 [2024-07-15 14:05:37.685104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.995 [2024-07-15 14:05:37.685134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.995 qpair failed and we were unable to recover it. 00:26:42.995 [2024-07-15 14:05:37.685242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.995 [2024-07-15 14:05:37.685273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.995 qpair failed and we were unable to recover it. 00:26:42.995 [2024-07-15 14:05:37.685450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.995 [2024-07-15 14:05:37.685480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.995 qpair failed and we were unable to recover it. 00:26:42.995 [2024-07-15 14:05:37.685604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.995 [2024-07-15 14:05:37.685634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.995 qpair failed and we were unable to recover it. 00:26:42.995 [2024-07-15 14:05:37.685811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.995 [2024-07-15 14:05:37.685837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.995 qpair failed and we were unable to recover it. 00:26:42.995 [2024-07-15 14:05:37.685951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.995 [2024-07-15 14:05:37.685976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.995 qpair failed and we were unable to recover it. 00:26:42.995 [2024-07-15 14:05:37.686121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.995 [2024-07-15 14:05:37.686160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.995 qpair failed and we were unable to recover it. 00:26:42.995 [2024-07-15 14:05:37.686338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.995 [2024-07-15 14:05:37.686369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.995 qpair failed and we were unable to recover it. 00:26:42.995 [2024-07-15 14:05:37.686546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.995 [2024-07-15 14:05:37.686575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.995 qpair failed and we were unable to recover it. 00:26:42.995 [2024-07-15 14:05:37.686699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.995 [2024-07-15 14:05:37.686729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.995 qpair failed and we were unable to recover it. 
00:26:42.995 [2024-07-15 14:05:37.686859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.995 [2024-07-15 14:05:37.686884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.996 [2024-07-15 14:05:37.686992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 14:05:37.687018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.996 [2024-07-15 14:05:37.687172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 14:05:37.687201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.996 [2024-07-15 14:05:37.687337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 14:05:37.687367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.996 [2024-07-15 14:05:37.687521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 14:05:37.687550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.996 [2024-07-15 14:05:37.687743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 14:05:37.687773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.996 [2024-07-15 14:05:37.687904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 14:05:37.687929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.996 [2024-07-15 14:05:37.688124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 14:05:37.688153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.996 [2024-07-15 14:05:37.688340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 14:05:37.688369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.996 [2024-07-15 14:05:37.688571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 14:05:37.688600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 
00:26:42.996 [2024-07-15 14:05:37.688748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 14:05:37.688778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.996 [2024-07-15 14:05:37.688894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 14:05:37.688919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.996 [2024-07-15 14:05:37.689066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 14:05:37.689096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.996 [2024-07-15 14:05:37.689270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 14:05:37.689303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.996 [2024-07-15 14:05:37.689458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 14:05:37.689487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.996 [2024-07-15 14:05:37.689643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 14:05:37.689672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.996 [2024-07-15 14:05:37.689812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 14:05:37.689838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.996 [2024-07-15 14:05:37.689951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 14:05:37.689976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.996 [2024-07-15 14:05:37.690169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 14:05:37.690198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.996 [2024-07-15 14:05:37.690350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 14:05:37.690379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 
00:26:42.996 [2024-07-15 14:05:37.690536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 14:05:37.690565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.996 [2024-07-15 14:05:37.690745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 14:05:37.690794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.996 [2024-07-15 14:05:37.690906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 14:05:37.690931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.996 [2024-07-15 14:05:37.691107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 14:05:37.691135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.996 [2024-07-15 14:05:37.691294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 14:05:37.691323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.996 [2024-07-15 14:05:37.691482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 14:05:37.691521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.996 [2024-07-15 14:05:37.691672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 14:05:37.691700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.996 [2024-07-15 14:05:37.691855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 14:05:37.691881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.996 [2024-07-15 14:05:37.692058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 14:05:37.692085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.996 [2024-07-15 14:05:37.692228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 14:05:37.692256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 
00:26:42.996 [2024-07-15 14:05:37.692429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 14:05:37.692457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.996 [2024-07-15 14:05:37.692576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 14:05:37.692604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.996 [2024-07-15 14:05:37.692749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.996 [2024-07-15 14:05:37.692792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.996 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 14:05:37.692924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 14:05:37.692949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 14:05:37.693080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 14:05:37.693131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 14:05:37.693308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 14:05:37.693336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 14:05:37.693485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 14:05:37.693512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 14:05:37.693660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 14:05:37.693688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 14:05:37.693836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 14:05:37.693862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 14:05:37.693967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 14:05:37.693992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 
00:26:42.997 [2024-07-15 14:05:37.694147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 14:05:37.694174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 14:05:37.694293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 14:05:37.694320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 14:05:37.694502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 14:05:37.694529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 14:05:37.694705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 14:05:37.694732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 14:05:37.694870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 14:05:37.694895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 14:05:37.695000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 14:05:37.695050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 14:05:37.695187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 14:05:37.695215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 14:05:37.695424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 14:05:37.695451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 14:05:37.695638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 14:05:37.695665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 14:05:37.695793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 14:05:37.695819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 
00:26:42.997 [2024-07-15 14:05:37.695922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 14:05:37.695947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 14:05:37.696062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 14:05:37.696089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 14:05:37.696264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 14:05:37.696291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 14:05:37.696397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 14:05:37.696424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 14:05:37.696563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 14:05:37.696590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 14:05:37.696742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 14:05:37.696769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 14:05:37.696881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 14:05:37.696907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 14:05:37.697040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 14:05:37.697066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 14:05:37.697209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 14:05:37.697235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 14:05:37.697360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 14:05:37.697386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 
00:26:42.997 [2024-07-15 14:05:37.697484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 14:05:37.697510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 14:05:37.697621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 14:05:37.697647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 14:05:37.697766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 14:05:37.697808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 14:05:37.697964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 14:05:37.697989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 14:05:37.698146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 14:05:37.698172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 14:05:37.698328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 14:05:37.698354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 14:05:37.698565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 14:05:37.698590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 14:05:37.698767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 14:05:37.698803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 14:05:37.698918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 14:05:37.698945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.997 [2024-07-15 14:05:37.699076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 14:05:37.699102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 
00:26:42.997 [2024-07-15 14:05:37.699228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.997 [2024-07-15 14:05:37.699255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.997 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 14:05:37.699392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 14:05:37.699418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 14:05:37.699614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 14:05:37.699640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 14:05:37.699772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 14:05:37.699799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 14:05:37.699909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 14:05:37.699935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 14:05:37.700134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 14:05:37.700160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 14:05:37.700287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 14:05:37.700322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 14:05:37.700479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 14:05:37.700504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 14:05:37.700679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 14:05:37.700705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 14:05:37.700851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 14:05:37.700876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 
00:26:42.998 [2024-07-15 14:05:37.701024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 14:05:37.701049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 14:05:37.701189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 14:05:37.701218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 14:05:37.701374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 14:05:37.701400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 14:05:37.701542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 14:05:37.701567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 14:05:37.701702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 14:05:37.701728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 14:05:37.701875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 14:05:37.701900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 14:05:37.702052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 14:05:37.702077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 14:05:37.702172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 14:05:37.702197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 14:05:37.702340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 14:05:37.702365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 14:05:37.702501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 14:05:37.702527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 
00:26:42.998 [2024-07-15 14:05:37.702643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 14:05:37.702668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 14:05:37.702827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 14:05:37.702853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 14:05:37.702966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 14:05:37.702999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 14:05:37.703147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 14:05:37.703172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 14:05:37.703345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 14:05:37.703371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 14:05:37.703477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 14:05:37.703503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 14:05:37.703686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 14:05:37.703710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 14:05:37.703839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 14:05:37.703864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 14:05:37.703976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 14:05:37.704009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 14:05:37.704186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 14:05:37.704220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 
00:26:42.998 [2024-07-15 14:05:37.704375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 14:05:37.704399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 14:05:37.704546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 14:05:37.704578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 14:05:37.704720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 14:05:37.704766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 14:05:37.704861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 14:05:37.704887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 14:05:37.705067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 14:05:37.705091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 14:05:37.705231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 14:05:37.705255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 14:05:37.705423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 14:05:37.705447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 14:05:37.705569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 14:05:37.705594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 14:05:37.705748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 14:05:37.705778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 14:05:37.705893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 14:05:37.705917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 
00:26:42.998 [2024-07-15 14:05:37.706017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.998 [2024-07-15 14:05:37.706056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.998 qpair failed and we were unable to recover it. 00:26:42.998 [2024-07-15 14:05:37.706251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.999 [2024-07-15 14:05:37.706275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.999 qpair failed and we were unable to recover it. 00:26:42.999 [2024-07-15 14:05:37.706385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.999 [2024-07-15 14:05:37.706409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.999 qpair failed and we were unable to recover it. 00:26:42.999 [2024-07-15 14:05:37.706538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.999 [2024-07-15 14:05:37.706562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.999 qpair failed and we were unable to recover it. 00:26:42.999 [2024-07-15 14:05:37.706667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.999 [2024-07-15 14:05:37.706692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.999 qpair failed and we were unable to recover it. 00:26:42.999 [2024-07-15 14:05:37.706846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.999 [2024-07-15 14:05:37.706871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.999 qpair failed and we were unable to recover it. 00:26:42.999 [2024-07-15 14:05:37.706969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.999 [2024-07-15 14:05:37.706994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.999 qpair failed and we were unable to recover it. 00:26:42.999 [2024-07-15 14:05:37.707169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.999 [2024-07-15 14:05:37.707203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.999 qpair failed and we were unable to recover it. 00:26:42.999 [2024-07-15 14:05:37.707310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.999 [2024-07-15 14:05:37.707334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.999 qpair failed and we were unable to recover it. 00:26:42.999 [2024-07-15 14:05:37.707478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.999 [2024-07-15 14:05:37.707502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.999 qpair failed and we were unable to recover it. 
00:26:42.999 [2024-07-15 14:05:37.707673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.999 [2024-07-15 14:05:37.707697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.999 qpair failed and we were unable to recover it. 00:26:42.999 [2024-07-15 14:05:37.707853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.999 [2024-07-15 14:05:37.707878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.999 qpair failed and we were unable to recover it. 00:26:42.999 [2024-07-15 14:05:37.708005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.999 [2024-07-15 14:05:37.708048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.999 qpair failed and we were unable to recover it. 00:26:42.999 [2024-07-15 14:05:37.708202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.999 [2024-07-15 14:05:37.708240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.999 qpair failed and we were unable to recover it. 00:26:42.999 [2024-07-15 14:05:37.708343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.999 [2024-07-15 14:05:37.708383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.999 qpair failed and we were unable to recover it. 00:26:42.999 [2024-07-15 14:05:37.708535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.999 [2024-07-15 14:05:37.708569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.999 qpair failed and we were unable to recover it. 00:26:42.999 [2024-07-15 14:05:37.708747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.999 [2024-07-15 14:05:37.708775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.999 qpair failed and we were unable to recover it. 00:26:42.999 [2024-07-15 14:05:37.708908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.999 [2024-07-15 14:05:37.708934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.999 qpair failed and we were unable to recover it. 00:26:42.999 [2024-07-15 14:05:37.709050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.999 [2024-07-15 14:05:37.709075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.999 qpair failed and we were unable to recover it. 00:26:42.999 [2024-07-15 14:05:37.709203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.999 [2024-07-15 14:05:37.709226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.999 qpair failed and we were unable to recover it. 
00:26:42.999 [2024-07-15 14:05:37.709345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.999 [2024-07-15 14:05:37.709369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.999 qpair failed and we were unable to recover it. 00:26:42.999 [2024-07-15 14:05:37.709571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.999 [2024-07-15 14:05:37.709595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.999 qpair failed and we were unable to recover it. 00:26:42.999 [2024-07-15 14:05:37.709765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.999 [2024-07-15 14:05:37.709791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.999 qpair failed and we were unable to recover it. 00:26:42.999 [2024-07-15 14:05:37.709899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.999 [2024-07-15 14:05:37.709925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.999 qpair failed and we were unable to recover it. 00:26:42.999 [2024-07-15 14:05:37.710138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.999 [2024-07-15 14:05:37.710161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.999 qpair failed and we were unable to recover it. 00:26:42.999 [2024-07-15 14:05:37.710308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.999 [2024-07-15 14:05:37.710331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.999 qpair failed and we were unable to recover it. 00:26:42.999 [2024-07-15 14:05:37.710534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.999 [2024-07-15 14:05:37.710558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.999 qpair failed and we were unable to recover it. 00:26:42.999 [2024-07-15 14:05:37.710763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.999 [2024-07-15 14:05:37.710787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.999 qpair failed and we were unable to recover it. 00:26:42.999 [2024-07-15 14:05:37.710928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.999 [2024-07-15 14:05:37.710953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.999 qpair failed and we were unable to recover it. 00:26:42.999 [2024-07-15 14:05:37.711157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.999 [2024-07-15 14:05:37.711181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.999 qpair failed and we were unable to recover it. 
00:26:42.999 [2024-07-15 14:05:37.711332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.999 [2024-07-15 14:05:37.711355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.999 qpair failed and we were unable to recover it. 00:26:42.999 [2024-07-15 14:05:37.711485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.999 [2024-07-15 14:05:37.711509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.999 qpair failed and we were unable to recover it. 00:26:42.999 [2024-07-15 14:05:37.711687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.999 [2024-07-15 14:05:37.711725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.999 qpair failed and we were unable to recover it. 00:26:42.999 [2024-07-15 14:05:37.711864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.999 [2024-07-15 14:05:37.711890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.999 qpair failed and we were unable to recover it. 00:26:42.999 [2024-07-15 14:05:37.712020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.999 [2024-07-15 14:05:37.712045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.999 qpair failed and we were unable to recover it. 00:26:42.999 [2024-07-15 14:05:37.712207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.999 [2024-07-15 14:05:37.712231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.999 qpair failed and we were unable to recover it. 00:26:42.999 [2024-07-15 14:05:37.712379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.999 [2024-07-15 14:05:37.712418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.999 qpair failed and we were unable to recover it. 00:26:42.999 [2024-07-15 14:05:37.712598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.999 [2024-07-15 14:05:37.712621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.999 qpair failed and we were unable to recover it. 00:26:42.999 [2024-07-15 14:05:37.712781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.999 [2024-07-15 14:05:37.712816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.999 qpair failed and we were unable to recover it. 00:26:42.999 [2024-07-15 14:05:37.712929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.999 [2024-07-15 14:05:37.712954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.999 qpair failed and we were unable to recover it. 
00:26:42.999 [2024-07-15 14:05:37.713158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.999 [2024-07-15 14:05:37.713182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.999 qpair failed and we were unable to recover it. 00:26:42.999 [2024-07-15 14:05:37.713305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.999 [2024-07-15 14:05:37.713328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:42.999 qpair failed and we were unable to recover it. 00:26:42.999 [2024-07-15 14:05:37.713480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.000 [2024-07-15 14:05:37.713504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.000 qpair failed and we were unable to recover it. 00:26:43.000 [2024-07-15 14:05:37.713660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.000 [2024-07-15 14:05:37.713699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.000 qpair failed and we were unable to recover it. 00:26:43.000 [2024-07-15 14:05:37.713878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.000 [2024-07-15 14:05:37.713904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.000 qpair failed and we were unable to recover it. 00:26:43.000 [2024-07-15 14:05:37.714003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.000 [2024-07-15 14:05:37.714028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.000 qpair failed and we were unable to recover it. 00:26:43.000 [2024-07-15 14:05:37.714224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.000 [2024-07-15 14:05:37.714247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.000 qpair failed and we were unable to recover it. 00:26:43.000 [2024-07-15 14:05:37.714442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.000 [2024-07-15 14:05:37.714466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.000 qpair failed and we were unable to recover it. 00:26:43.000 [2024-07-15 14:05:37.714651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.000 [2024-07-15 14:05:37.714674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.000 qpair failed and we were unable to recover it. 00:26:43.000 [2024-07-15 14:05:37.714841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.000 [2024-07-15 14:05:37.714867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.000 qpair failed and we were unable to recover it. 
00:26:43.000 [2024-07-15 14:05:37.715128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.000 [2024-07-15 14:05:37.715152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.000 qpair failed and we were unable to recover it. 00:26:43.000 [2024-07-15 14:05:37.715288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.000 [2024-07-15 14:05:37.715312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.000 qpair failed and we were unable to recover it. 00:26:43.000 [2024-07-15 14:05:37.715494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.000 [2024-07-15 14:05:37.715518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.000 qpair failed and we were unable to recover it. 00:26:43.000 [2024-07-15 14:05:37.715703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.000 [2024-07-15 14:05:37.715748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.000 qpair failed and we were unable to recover it. 00:26:43.000 [2024-07-15 14:05:37.715864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.000 [2024-07-15 14:05:37.715889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.000 qpair failed and we were unable to recover it. 00:26:43.000 [2024-07-15 14:05:37.716014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.000 [2024-07-15 14:05:37.716054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.000 qpair failed and we were unable to recover it. 00:26:43.000 [2024-07-15 14:05:37.716185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.000 [2024-07-15 14:05:37.716209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.000 qpair failed and we were unable to recover it. 00:26:43.000 [2024-07-15 14:05:37.716388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.000 [2024-07-15 14:05:37.716443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.000 qpair failed and we were unable to recover it. 00:26:43.000 [2024-07-15 14:05:37.716655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.000 [2024-07-15 14:05:37.716706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.000 qpair failed and we were unable to recover it. 00:26:43.000 [2024-07-15 14:05:37.716892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.000 [2024-07-15 14:05:37.716917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.000 qpair failed and we were unable to recover it. 
00:26:43.000 [2024-07-15 14:05:37.717103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.000 [2024-07-15 14:05:37.717154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.000 qpair failed and we were unable to recover it. 00:26:43.000 [2024-07-15 14:05:37.717356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.000 [2024-07-15 14:05:37.717403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.000 qpair failed and we were unable to recover it. 00:26:43.000 [2024-07-15 14:05:37.717658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.000 [2024-07-15 14:05:37.717708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.000 qpair failed and we were unable to recover it. 00:26:43.000 [2024-07-15 14:05:37.717899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.000 [2024-07-15 14:05:37.717924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.000 qpair failed and we were unable to recover it. 00:26:43.000 [2024-07-15 14:05:37.718066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.000 [2024-07-15 14:05:37.718090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.000 qpair failed and we were unable to recover it. 00:26:43.000 [2024-07-15 14:05:37.718272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.000 [2024-07-15 14:05:37.718318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.000 qpair failed and we were unable to recover it. 00:26:43.000 [2024-07-15 14:05:37.718499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.000 [2024-07-15 14:05:37.718552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.000 qpair failed and we were unable to recover it. 00:26:43.000 [2024-07-15 14:05:37.718750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.000 [2024-07-15 14:05:37.718810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.000 qpair failed and we were unable to recover it. 00:26:43.000 [2024-07-15 14:05:37.718945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.000 [2024-07-15 14:05:37.718969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.000 qpair failed and we were unable to recover it. 00:26:43.000 [2024-07-15 14:05:37.719145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.000 [2024-07-15 14:05:37.719196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.000 qpair failed and we were unable to recover it. 
00:26:43.000 [2024-07-15 14:05:37.719398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.000 [2024-07-15 14:05:37.719445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.000 qpair failed and we were unable to recover it. 00:26:43.000 [2024-07-15 14:05:37.719661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.000 [2024-07-15 14:05:37.719707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.000 qpair failed and we were unable to recover it. 00:26:43.000 [2024-07-15 14:05:37.719890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.000 [2024-07-15 14:05:37.719914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.000 qpair failed and we were unable to recover it. 00:26:43.000 [2024-07-15 14:05:37.720126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.000 [2024-07-15 14:05:37.720172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.000 qpair failed and we were unable to recover it. 00:26:43.000 [2024-07-15 14:05:37.720362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.000 [2024-07-15 14:05:37.720409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.000 qpair failed and we were unable to recover it. 00:26:43.000 [2024-07-15 14:05:37.720606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.000 [2024-07-15 14:05:37.720653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.000 qpair failed and we were unable to recover it. 00:26:43.000 [2024-07-15 14:05:37.720836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.000 [2024-07-15 14:05:37.720861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.000 qpair failed and we were unable to recover it. 00:26:43.000 [2024-07-15 14:05:37.720988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.001 [2024-07-15 14:05:37.721012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.001 qpair failed and we were unable to recover it. 00:26:43.001 [2024-07-15 14:05:37.721159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.001 [2024-07-15 14:05:37.721217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.001 qpair failed and we were unable to recover it. 00:26:43.001 [2024-07-15 14:05:37.721390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.001 [2024-07-15 14:05:37.721436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.001 qpair failed and we were unable to recover it. 
00:26:43.001 [2024-07-15 14:05:37.721695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.001 [2024-07-15 14:05:37.721727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.001 qpair failed and we were unable to recover it. 00:26:43.001 [2024-07-15 14:05:37.721904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.001 [2024-07-15 14:05:37.721928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.001 qpair failed and we were unable to recover it. 00:26:43.001 [2024-07-15 14:05:37.722100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.001 [2024-07-15 14:05:37.722145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.001 qpair failed and we were unable to recover it. 00:26:43.001 [2024-07-15 14:05:37.722355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.001 [2024-07-15 14:05:37.722402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.001 qpair failed and we were unable to recover it. 00:26:43.001 [2024-07-15 14:05:37.722603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.001 [2024-07-15 14:05:37.722653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.001 qpair failed and we were unable to recover it. 00:26:43.001 [2024-07-15 14:05:37.722851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.001 [2024-07-15 14:05:37.722876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.001 qpair failed and we were unable to recover it. 00:26:43.001 [2024-07-15 14:05:37.723001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.001 [2024-07-15 14:05:37.723044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.001 qpair failed and we were unable to recover it. 00:26:43.001 [2024-07-15 14:05:37.723255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.001 [2024-07-15 14:05:37.723302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.001 qpair failed and we were unable to recover it. 00:26:43.001 [2024-07-15 14:05:37.723501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.001 [2024-07-15 14:05:37.723548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.001 qpair failed and we were unable to recover it. 00:26:43.001 [2024-07-15 14:05:37.723762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.001 [2024-07-15 14:05:37.723807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.001 qpair failed and we were unable to recover it. 
00:26:43.001 [2024-07-15 14:05:37.723920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.001 [2024-07-15 14:05:37.723944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.001 qpair failed and we were unable to recover it. 00:26:43.001 [2024-07-15 14:05:37.724092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.001 [2024-07-15 14:05:37.724139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.001 qpair failed and we were unable to recover it. 00:26:43.001 [2024-07-15 14:05:37.724332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.001 [2024-07-15 14:05:37.724379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.001 qpair failed and we were unable to recover it. 00:26:43.001 [2024-07-15 14:05:37.724571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.001 [2024-07-15 14:05:37.724627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.001 qpair failed and we were unable to recover it. 00:26:43.001 [2024-07-15 14:05:37.724828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.001 [2024-07-15 14:05:37.724853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.001 qpair failed and we were unable to recover it. 00:26:43.001 [2024-07-15 14:05:37.724965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.001 [2024-07-15 14:05:37.724998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.001 qpair failed and we were unable to recover it. 00:26:43.001 [2024-07-15 14:05:37.725166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.001 [2024-07-15 14:05:37.725213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.001 qpair failed and we were unable to recover it. 00:26:43.001 [2024-07-15 14:05:37.725422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.001 [2024-07-15 14:05:37.725483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.001 qpair failed and we were unable to recover it. 00:26:43.001 [2024-07-15 14:05:37.725680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.001 [2024-07-15 14:05:37.725728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.001 qpair failed and we were unable to recover it. 00:26:43.001 [2024-07-15 14:05:37.725921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.001 [2024-07-15 14:05:37.725945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.001 qpair failed and we were unable to recover it. 
00:26:43.001 [2024-07-15 14:05:37.726127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.001 [2024-07-15 14:05:37.726174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.001 qpair failed and we were unable to recover it. 00:26:43.001 [2024-07-15 14:05:37.726393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.001 [2024-07-15 14:05:37.726440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.001 qpair failed and we were unable to recover it. 00:26:43.001 [2024-07-15 14:05:37.726620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.001 [2024-07-15 14:05:37.726667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.001 qpair failed and we were unable to recover it. 00:26:43.001 [2024-07-15 14:05:37.726855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.001 [2024-07-15 14:05:37.726880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.001 qpair failed and we were unable to recover it. 00:26:43.001 [2024-07-15 14:05:37.727052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.001 [2024-07-15 14:05:37.727099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.001 qpair failed and we were unable to recover it. 00:26:43.001 [2024-07-15 14:05:37.727311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.001 [2024-07-15 14:05:37.727367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.001 qpair failed and we were unable to recover it. 00:26:43.001 [2024-07-15 14:05:37.727550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.001 [2024-07-15 14:05:37.727598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.001 qpair failed and we were unable to recover it. 00:26:43.001 [2024-07-15 14:05:37.727760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.001 [2024-07-15 14:05:37.727821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.001 qpair failed and we were unable to recover it. 00:26:43.001 [2024-07-15 14:05:37.727948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.001 [2024-07-15 14:05:37.727988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.001 qpair failed and we were unable to recover it. 00:26:43.001 [2024-07-15 14:05:37.728141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.001 [2024-07-15 14:05:37.728188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.001 qpair failed and we were unable to recover it. 
00:26:43.001 [2024-07-15 14:05:37.728372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.001 [2024-07-15 14:05:37.728418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.001 qpair failed and we were unable to recover it. 00:26:43.001 [2024-07-15 14:05:37.728628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.001 [2024-07-15 14:05:37.728677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.001 qpair failed and we were unable to recover it. 00:26:43.001 [2024-07-15 14:05:37.728908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.001 [2024-07-15 14:05:37.728933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.001 qpair failed and we were unable to recover it. 00:26:43.001 [2024-07-15 14:05:37.729053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.001 [2024-07-15 14:05:37.729078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.001 qpair failed and we were unable to recover it. 00:26:43.001 [2024-07-15 14:05:37.729261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.001 [2024-07-15 14:05:37.729284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.001 qpair failed and we were unable to recover it. 00:26:43.001 [2024-07-15 14:05:37.729501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.001 [2024-07-15 14:05:37.729524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.001 qpair failed and we were unable to recover it. 00:26:43.001 [2024-07-15 14:05:37.729647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.001 [2024-07-15 14:05:37.729670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.001 qpair failed and we were unable to recover it. 00:26:43.001 [2024-07-15 14:05:37.729832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.001 [2024-07-15 14:05:37.729873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.001 qpair failed and we were unable to recover it. 00:26:43.001 [2024-07-15 14:05:37.730020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.002 [2024-07-15 14:05:37.730045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.002 qpair failed and we were unable to recover it. 00:26:43.002 [2024-07-15 14:05:37.730232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.002 [2024-07-15 14:05:37.730279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.002 qpair failed and we were unable to recover it. 
00:26:43.002 [2024-07-15 14:05:37.730455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.002 [2024-07-15 14:05:37.730510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.002 qpair failed and we were unable to recover it. 00:26:43.002 [2024-07-15 14:05:37.730695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.002 [2024-07-15 14:05:37.730752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.002 qpair failed and we were unable to recover it. 00:26:43.002 [2024-07-15 14:05:37.730924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.002 [2024-07-15 14:05:37.730972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.002 qpair failed and we were unable to recover it. 00:26:43.002 [2024-07-15 14:05:37.731157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.002 [2024-07-15 14:05:37.731211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.002 qpair failed and we were unable to recover it. 00:26:43.002 [2024-07-15 14:05:37.731393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.002 [2024-07-15 14:05:37.731440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.002 qpair failed and we were unable to recover it. 00:26:43.002 [2024-07-15 14:05:37.731648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.002 [2024-07-15 14:05:37.731699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.002 qpair failed and we were unable to recover it. 00:26:43.002 [2024-07-15 14:05:37.731900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.002 [2024-07-15 14:05:37.731950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.002 qpair failed and we were unable to recover it. 00:26:43.002 [2024-07-15 14:05:37.732154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.002 [2024-07-15 14:05:37.732204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.002 qpair failed and we were unable to recover it. 00:26:43.002 [2024-07-15 14:05:37.732514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.002 [2024-07-15 14:05:37.732565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.002 qpair failed and we were unable to recover it. 00:26:43.002 [2024-07-15 14:05:37.732758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.002 [2024-07-15 14:05:37.732825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.002 qpair failed and we were unable to recover it. 
00:26:43.002 [2024-07-15 14:05:37.732989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.002 [2024-07-15 14:05:37.733061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.002 qpair failed and we were unable to recover it. 00:26:43.002 [2024-07-15 14:05:37.733265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.002 [2024-07-15 14:05:37.733316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.002 qpair failed and we were unable to recover it. 00:26:43.002 [2024-07-15 14:05:37.733533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.002 [2024-07-15 14:05:37.733583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.002 qpair failed and we were unable to recover it. 00:26:43.002 [2024-07-15 14:05:37.733788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.002 [2024-07-15 14:05:37.733839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.002 qpair failed and we were unable to recover it. 00:26:43.002 [2024-07-15 14:05:37.733998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.002 [2024-07-15 14:05:37.734034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.002 qpair failed and we were unable to recover it. 00:26:43.002 [2024-07-15 14:05:37.734215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.002 [2024-07-15 14:05:37.734263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.002 qpair failed and we were unable to recover it. 00:26:43.002 [2024-07-15 14:05:37.734443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.002 [2024-07-15 14:05:37.734489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.002 qpair failed and we were unable to recover it. 00:26:43.002 [2024-07-15 14:05:37.734714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.002 [2024-07-15 14:05:37.734783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.002 qpair failed and we were unable to recover it. 00:26:43.002 [2024-07-15 14:05:37.734896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.002 [2024-07-15 14:05:37.734930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.002 qpair failed and we were unable to recover it. 00:26:43.002 [2024-07-15 14:05:37.735150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.002 [2024-07-15 14:05:37.735198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.002 qpair failed and we were unable to recover it. 
00:26:43.002 [2024-07-15 14:05:37.735398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.002 [2024-07-15 14:05:37.735445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.002 qpair failed and we were unable to recover it. 00:26:43.002 [2024-07-15 14:05:37.735626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.002 [2024-07-15 14:05:37.735672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.002 qpair failed and we were unable to recover it. 00:26:43.002 [2024-07-15 14:05:37.735836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.002 [2024-07-15 14:05:37.735871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.002 qpair failed and we were unable to recover it. 00:26:43.002 [2024-07-15 14:05:37.736014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.002 [2024-07-15 14:05:37.736073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.002 qpair failed and we were unable to recover it. 00:26:43.002 [2024-07-15 14:05:37.736274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.002 [2024-07-15 14:05:37.736321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.002 qpair failed and we were unable to recover it. 00:26:43.002 [2024-07-15 14:05:37.736513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.002 [2024-07-15 14:05:37.736560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.002 qpair failed and we were unable to recover it. 00:26:43.002 [2024-07-15 14:05:37.736805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.002 [2024-07-15 14:05:37.736840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.002 qpair failed and we were unable to recover it. 00:26:43.002 [2024-07-15 14:05:37.736959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.002 [2024-07-15 14:05:37.737002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.002 qpair failed and we were unable to recover it. 00:26:43.002 [2024-07-15 14:05:37.737247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.002 [2024-07-15 14:05:37.737294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.002 qpair failed and we were unable to recover it. 00:26:43.002 [2024-07-15 14:05:37.737470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.002 [2024-07-15 14:05:37.737517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.002 qpair failed and we were unable to recover it. 
00:26:43.002 [2024-07-15 14:05:37.737679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.002 [2024-07-15 14:05:37.737725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.002 qpair failed and we were unable to recover it. 00:26:43.002 [2024-07-15 14:05:37.737909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.002 [2024-07-15 14:05:37.737945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.002 qpair failed and we were unable to recover it. 00:26:43.002 [2024-07-15 14:05:37.738086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.002 [2024-07-15 14:05:37.738133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.002 qpair failed and we were unable to recover it. 00:26:43.002 [2024-07-15 14:05:37.738357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.002 [2024-07-15 14:05:37.738405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.002 qpair failed and we were unable to recover it. 00:26:43.002 [2024-07-15 14:05:37.738616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.002 [2024-07-15 14:05:37.738663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.002 qpair failed and we were unable to recover it. 00:26:43.002 [2024-07-15 14:05:37.738851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.002 [2024-07-15 14:05:37.738886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.002 qpair failed and we were unable to recover it. 00:26:43.002 [2024-07-15 14:05:37.739066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.002 [2024-07-15 14:05:37.739115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.002 qpair failed and we were unable to recover it. 00:26:43.002 [2024-07-15 14:05:37.739297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.002 [2024-07-15 14:05:37.739345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.002 qpair failed and we were unable to recover it. 00:26:43.002 [2024-07-15 14:05:37.739568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.002 [2024-07-15 14:05:37.739619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.002 qpair failed and we were unable to recover it. 00:26:43.002 [2024-07-15 14:05:37.739838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.002 [2024-07-15 14:05:37.739873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.003 qpair failed and we were unable to recover it. 
00:26:43.007 [2024-07-15 14:05:37.786262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.007 [2024-07-15 14:05:37.786294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.007 qpair failed and we were unable to recover it. 00:26:43.007 [2024-07-15 14:05:37.786450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.007 [2024-07-15 14:05:37.786485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.007 qpair failed and we were unable to recover it. 00:26:43.007 [2024-07-15 14:05:37.786681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.007 [2024-07-15 14:05:37.786715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.007 qpair failed and we were unable to recover it. 00:26:43.007 [2024-07-15 14:05:37.786923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.007 [2024-07-15 14:05:37.786956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.007 qpair failed and we were unable to recover it. 00:26:43.007 [2024-07-15 14:05:37.787140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.007 [2024-07-15 14:05:37.787172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.007 qpair failed and we were unable to recover it. 00:26:43.007 [2024-07-15 14:05:37.787350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.007 [2024-07-15 14:05:37.787401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.007 qpair failed and we were unable to recover it. 00:26:43.007 [2024-07-15 14:05:37.787665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.007 [2024-07-15 14:05:37.787699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.007 qpair failed and we were unable to recover it. 00:26:43.007 [2024-07-15 14:05:37.787921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.007 [2024-07-15 14:05:37.787955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.007 qpair failed and we were unable to recover it. 00:26:43.007 [2024-07-15 14:05:37.788111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.007 [2024-07-15 14:05:37.788144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.007 qpair failed and we were unable to recover it. 00:26:43.007 [2024-07-15 14:05:37.788307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.008 [2024-07-15 14:05:37.788340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.008 qpair failed and we were unable to recover it. 
00:26:43.008 [2024-07-15 14:05:37.788610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.008 [2024-07-15 14:05:37.788642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.008 qpair failed and we were unable to recover it. 00:26:43.008 [2024-07-15 14:05:37.788759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.008 [2024-07-15 14:05:37.788793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.008 qpair failed and we were unable to recover it. 00:26:43.008 [2024-07-15 14:05:37.788936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.008 [2024-07-15 14:05:37.788969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.008 qpair failed and we were unable to recover it. 00:26:43.008 [2024-07-15 14:05:37.789218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.008 [2024-07-15 14:05:37.789284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.008 qpair failed and we were unable to recover it. 00:26:43.008 [2024-07-15 14:05:37.789552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.008 [2024-07-15 14:05:37.789586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.008 qpair failed and we were unable to recover it. 00:26:43.008 [2024-07-15 14:05:37.789744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.008 [2024-07-15 14:05:37.789778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.008 qpair failed and we were unable to recover it. 00:26:43.008 [2024-07-15 14:05:37.789930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.008 [2024-07-15 14:05:37.789962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.008 qpair failed and we were unable to recover it. 00:26:43.008 [2024-07-15 14:05:37.790082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.008 [2024-07-15 14:05:37.790114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.008 qpair failed and we were unable to recover it. 00:26:43.008 [2024-07-15 14:05:37.790276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.008 [2024-07-15 14:05:37.790338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.008 qpair failed and we were unable to recover it. 00:26:43.008 [2024-07-15 14:05:37.790546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.008 [2024-07-15 14:05:37.790578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.008 qpair failed and we were unable to recover it. 
00:26:43.008 [2024-07-15 14:05:37.790825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.008 [2024-07-15 14:05:37.790876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.008 qpair failed and we were unable to recover it. 00:26:43.008 [2024-07-15 14:05:37.791176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.008 [2024-07-15 14:05:37.791226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.008 qpair failed and we were unable to recover it. 00:26:43.008 [2024-07-15 14:05:37.791500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.008 [2024-07-15 14:05:37.791534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.008 qpair failed and we were unable to recover it. 00:26:43.008 [2024-07-15 14:05:37.791710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.008 [2024-07-15 14:05:37.791751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.008 qpair failed and we were unable to recover it. 00:26:43.008 [2024-07-15 14:05:37.791914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.008 [2024-07-15 14:05:37.791946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.008 qpair failed and we were unable to recover it. 00:26:43.008 [2024-07-15 14:05:37.792096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.008 [2024-07-15 14:05:37.792164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.008 qpair failed and we were unable to recover it. 00:26:43.008 [2024-07-15 14:05:37.792359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.008 [2024-07-15 14:05:37.792427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.008 qpair failed and we were unable to recover it. 00:26:43.008 [2024-07-15 14:05:37.792693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.008 [2024-07-15 14:05:37.792747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.008 qpair failed and we were unable to recover it. 00:26:43.008 [2024-07-15 14:05:37.792950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.008 [2024-07-15 14:05:37.792983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.008 qpair failed and we were unable to recover it. 00:26:43.008 [2024-07-15 14:05:37.793139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.008 [2024-07-15 14:05:37.793171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.008 qpair failed and we were unable to recover it. 
00:26:43.008 [2024-07-15 14:05:37.793357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.008 [2024-07-15 14:05:37.793423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.008 qpair failed and we were unable to recover it. 00:26:43.008 [2024-07-15 14:05:37.793656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.008 [2024-07-15 14:05:37.793690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.008 qpair failed and we were unable to recover it. 00:26:43.008 [2024-07-15 14:05:37.793913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.008 [2024-07-15 14:05:37.793946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.008 qpair failed and we were unable to recover it. 00:26:43.008 [2024-07-15 14:05:37.794062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.008 [2024-07-15 14:05:37.794095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.008 qpair failed and we were unable to recover it. 00:26:43.008 [2024-07-15 14:05:37.794258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.008 [2024-07-15 14:05:37.794325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.008 qpair failed and we were unable to recover it. 00:26:43.008 [2024-07-15 14:05:37.794621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.008 [2024-07-15 14:05:37.794655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.008 qpair failed and we were unable to recover it. 00:26:43.008 [2024-07-15 14:05:37.794876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.008 [2024-07-15 14:05:37.794910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.008 qpair failed and we were unable to recover it. 00:26:43.008 [2024-07-15 14:05:37.795068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.008 [2024-07-15 14:05:37.795101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.008 qpair failed and we were unable to recover it. 00:26:43.008 [2024-07-15 14:05:37.795310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.008 [2024-07-15 14:05:37.795377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.008 qpair failed and we were unable to recover it. 00:26:43.008 [2024-07-15 14:05:37.795625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.008 [2024-07-15 14:05:37.795657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.008 qpair failed and we were unable to recover it. 
00:26:43.008 [2024-07-15 14:05:37.795844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.008 [2024-07-15 14:05:37.795895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.008 qpair failed and we were unable to recover it. 00:26:43.008 [2024-07-15 14:05:37.796058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.008 [2024-07-15 14:05:37.796090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.009 qpair failed and we were unable to recover it. 00:26:43.009 [2024-07-15 14:05:37.796253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.009 [2024-07-15 14:05:37.796286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.009 qpair failed and we were unable to recover it. 00:26:43.009 [2024-07-15 14:05:37.796477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.009 [2024-07-15 14:05:37.796510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.009 qpair failed and we were unable to recover it. 00:26:43.009 [2024-07-15 14:05:37.796693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.009 [2024-07-15 14:05:37.796770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.009 qpair failed and we were unable to recover it. 00:26:43.009 [2024-07-15 14:05:37.797023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.009 [2024-07-15 14:05:37.797056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.009 qpair failed and we were unable to recover it. 00:26:43.009 [2024-07-15 14:05:37.797255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.009 [2024-07-15 14:05:37.797287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.009 qpair failed and we were unable to recover it. 00:26:43.009 [2024-07-15 14:05:37.797498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.009 [2024-07-15 14:05:37.797565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.009 qpair failed and we were unable to recover it. 00:26:43.009 [2024-07-15 14:05:37.797766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.009 [2024-07-15 14:05:37.797804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.009 qpair failed and we were unable to recover it. 00:26:43.009 [2024-07-15 14:05:37.797967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.009 [2024-07-15 14:05:37.797999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.009 qpair failed and we were unable to recover it. 
00:26:43.009 [2024-07-15 14:05:37.798193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.009 [2024-07-15 14:05:37.798226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.009 qpair failed and we were unable to recover it. 00:26:43.009 [2024-07-15 14:05:37.798397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.009 [2024-07-15 14:05:37.798430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.009 qpair failed and we were unable to recover it. 00:26:43.009 [2024-07-15 14:05:37.798623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.009 [2024-07-15 14:05:37.798656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.009 qpair failed and we were unable to recover it. 00:26:43.009 [2024-07-15 14:05:37.798842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.009 [2024-07-15 14:05:37.798876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.009 qpair failed and we were unable to recover it. 00:26:43.009 [2024-07-15 14:05:37.799007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.009 [2024-07-15 14:05:37.799040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.009 qpair failed and we were unable to recover it. 00:26:43.009 [2024-07-15 14:05:37.799236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.009 [2024-07-15 14:05:37.799297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.009 qpair failed and we were unable to recover it. 00:26:43.009 [2024-07-15 14:05:37.799525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.009 [2024-07-15 14:05:37.799558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.009 qpair failed and we were unable to recover it. 00:26:43.009 [2024-07-15 14:05:37.799730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.009 [2024-07-15 14:05:37.799771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.009 qpair failed and we were unable to recover it. 00:26:43.009 [2024-07-15 14:05:37.799904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.009 [2024-07-15 14:05:37.799937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.009 qpair failed and we were unable to recover it. 00:26:43.009 [2024-07-15 14:05:37.800085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.009 [2024-07-15 14:05:37.800119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.009 qpair failed and we were unable to recover it. 
00:26:43.009 [2024-07-15 14:05:37.800360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.009 [2024-07-15 14:05:37.800392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.009 qpair failed and we were unable to recover it. 00:26:43.009 [2024-07-15 14:05:37.800533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.009 [2024-07-15 14:05:37.800566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.009 qpair failed and we were unable to recover it. 00:26:43.009 [2024-07-15 14:05:37.800772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.009 [2024-07-15 14:05:37.800818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.009 qpair failed and we were unable to recover it. 00:26:43.009 [2024-07-15 14:05:37.801089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.009 [2024-07-15 14:05:37.801135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.009 qpair failed and we were unable to recover it. 00:26:43.009 [2024-07-15 14:05:37.801417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.009 [2024-07-15 14:05:37.801490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.278 qpair failed and we were unable to recover it. 00:26:43.278 [2024-07-15 14:05:37.801754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.278 [2024-07-15 14:05:37.801789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.278 qpair failed and we were unable to recover it. 00:26:43.278 [2024-07-15 14:05:37.801912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.278 [2024-07-15 14:05:37.801946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.278 qpair failed and we were unable to recover it. 00:26:43.278 [2024-07-15 14:05:37.802125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.278 [2024-07-15 14:05:37.802194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.278 qpair failed and we were unable to recover it. 00:26:43.278 [2024-07-15 14:05:37.802482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.278 [2024-07-15 14:05:37.802515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.278 qpair failed and we were unable to recover it. 00:26:43.278 [2024-07-15 14:05:37.802655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.278 [2024-07-15 14:05:37.802688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.278 qpair failed and we were unable to recover it. 
00:26:43.278 [2024-07-15 14:05:37.802990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.278 [2024-07-15 14:05:37.803024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.278 qpair failed and we were unable to recover it. 00:26:43.278 [2024-07-15 14:05:37.803184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.278 [2024-07-15 14:05:37.803216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.278 qpair failed and we were unable to recover it. 00:26:43.278 [2024-07-15 14:05:37.803461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.278 [2024-07-15 14:05:37.803494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.278 qpair failed and we were unable to recover it. 00:26:43.278 [2024-07-15 14:05:37.803650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.278 [2024-07-15 14:05:37.803683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.278 qpair failed and we were unable to recover it. 00:26:43.278 [2024-07-15 14:05:37.803953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.278 [2024-07-15 14:05:37.804017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.278 qpair failed and we were unable to recover it. 00:26:43.278 [2024-07-15 14:05:37.804242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.278 [2024-07-15 14:05:37.804275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.278 qpair failed and we were unable to recover it. 00:26:43.278 [2024-07-15 14:05:37.804492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.278 [2024-07-15 14:05:37.804525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.278 qpair failed and we were unable to recover it. 00:26:43.278 [2024-07-15 14:05:37.804709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.278 [2024-07-15 14:05:37.804752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.278 qpair failed and we were unable to recover it. 00:26:43.278 [2024-07-15 14:05:37.805029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.278 [2024-07-15 14:05:37.805062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.278 qpair failed and we were unable to recover it. 00:26:43.278 [2024-07-15 14:05:37.805268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.278 [2024-07-15 14:05:37.805301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.278 qpair failed and we were unable to recover it. 
00:26:43.278 [2024-07-15 14:05:37.805575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.278 [2024-07-15 14:05:37.805609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.278 qpair failed and we were unable to recover it. 00:26:43.278 [2024-07-15 14:05:37.805914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.278 [2024-07-15 14:05:37.805948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.278 qpair failed and we were unable to recover it. 00:26:43.278 [2024-07-15 14:05:37.806139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.278 [2024-07-15 14:05:37.806177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.278 qpair failed and we were unable to recover it. 00:26:43.278 [2024-07-15 14:05:37.806392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.278 [2024-07-15 14:05:37.806425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.278 qpair failed and we were unable to recover it. 00:26:43.278 [2024-07-15 14:05:37.806569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.278 [2024-07-15 14:05:37.806602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.278 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 14:05:37.806825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 14:05:37.806858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 14:05:37.807086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 14:05:37.807118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 14:05:37.807339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 14:05:37.807372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 14:05:37.807549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 14:05:37.807581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 14:05:37.807787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 14:05:37.807842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 
00:26:43.279 [2024-07-15 14:05:37.808036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 14:05:37.808069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 14:05:37.808275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 14:05:37.808307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 14:05:37.808582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 14:05:37.808633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 14:05:37.808867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 14:05:37.808901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 14:05:37.809063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 14:05:37.809096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 14:05:37.809302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 14:05:37.809370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 14:05:37.809643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 14:05:37.809676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 14:05:37.809866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 14:05:37.809900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 14:05:37.810174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 14:05:37.810240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 14:05:37.810477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 14:05:37.810543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 
00:26:43.279 [2024-07-15 14:05:37.810833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 14:05:37.810866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 14:05:37.811054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 14:05:37.811094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 14:05:37.811294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 14:05:37.811360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 14:05:37.811600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 14:05:37.811633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 14:05:37.811817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 14:05:37.811853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 14:05:37.812079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 14:05:37.812147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 14:05:37.812393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 14:05:37.812444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 14:05:37.812705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 14:05:37.812755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 14:05:37.812952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 14:05:37.813022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 14:05:37.813268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 14:05:37.813335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 
00:26:43.279 [2024-07-15 14:05:37.813606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 14:05:37.813639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 14:05:37.813789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 14:05:37.813823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 14:05:37.813972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 14:05:37.814044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 14:05:37.814321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 14:05:37.814354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 14:05:37.814504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 14:05:37.814537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 14:05:37.814807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 14:05:37.814859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 14:05:37.815097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 14:05:37.815130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 14:05:37.815322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 14:05:37.815354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 14:05:37.815585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 14:05:37.815635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 14:05:37.815849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 14:05:37.815883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 
00:26:43.279 [2024-07-15 14:05:37.816100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 14:05:37.816133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 14:05:37.816307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 14:05:37.816373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 14:05:37.816557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 14:05:37.816622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 14:05:37.816886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 14:05:37.816920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 14:05:37.817096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.279 [2024-07-15 14:05:37.817169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.279 qpair failed and we were unable to recover it. 00:26:43.279 [2024-07-15 14:05:37.817446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 14:05:37.817478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.280 [2024-07-15 14:05:37.817632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 14:05:37.817665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.280 [2024-07-15 14:05:37.817937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 14:05:37.818008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.280 [2024-07-15 14:05:37.818270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 14:05:37.818338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.280 [2024-07-15 14:05:37.818565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 14:05:37.818628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 
00:26:43.280 [2024-07-15 14:05:37.818873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 14:05:37.818923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.280 [2024-07-15 14:05:37.819204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 14:05:37.819236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.280 [2024-07-15 14:05:37.819468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 14:05:37.819500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.280 [2024-07-15 14:05:37.819809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 14:05:37.819843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.280 [2024-07-15 14:05:37.820011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 14:05:37.820079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.280 [2024-07-15 14:05:37.820344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 14:05:37.820397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.280 [2024-07-15 14:05:37.820634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 14:05:37.820688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.280 [2024-07-15 14:05:37.820975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 14:05:37.821008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.280 [2024-07-15 14:05:37.821236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 14:05:37.821303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.280 [2024-07-15 14:05:37.821581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 14:05:37.821613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 
00:26:43.280 [2024-07-15 14:05:37.821746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 14:05:37.821780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.280 [2024-07-15 14:05:37.821925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 14:05:37.821958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.280 [2024-07-15 14:05:37.822182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 14:05:37.822248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.280 [2024-07-15 14:05:37.822576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 14:05:37.822609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.280 [2024-07-15 14:05:37.822781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 14:05:37.822817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.280 [2024-07-15 14:05:37.823020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 14:05:37.823075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.280 [2024-07-15 14:05:37.823360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 14:05:37.823426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.280 [2024-07-15 14:05:37.823615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 14:05:37.823676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.280 [2024-07-15 14:05:37.823906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 14:05:37.823940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.280 [2024-07-15 14:05:37.824136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 14:05:37.824169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 
00:26:43.280 [2024-07-15 14:05:37.824400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 14:05:37.824474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.280 [2024-07-15 14:05:37.824689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 14:05:37.824721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.280 [2024-07-15 14:05:37.824943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 14:05:37.824995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.280 [2024-07-15 14:05:37.825303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 14:05:37.825335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.280 [2024-07-15 14:05:37.825554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 14:05:37.825587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.280 [2024-07-15 14:05:37.825786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 14:05:37.825820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.280 [2024-07-15 14:05:37.826047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 14:05:37.826116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.280 [2024-07-15 14:05:37.826375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 14:05:37.826408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.280 [2024-07-15 14:05:37.826533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 14:05:37.826565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.280 [2024-07-15 14:05:37.826725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 14:05:37.826782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 
00:26:43.280 [2024-07-15 14:05:37.827006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 14:05:37.827074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.280 [2024-07-15 14:05:37.827318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 14:05:37.827351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.280 [2024-07-15 14:05:37.827567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 14:05:37.827622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.280 [2024-07-15 14:05:37.827873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.280 [2024-07-15 14:05:37.827944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.280 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 14:05:37.828239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 14:05:37.828272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 14:05:37.828456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 14:05:37.828491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 14:05:37.828677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 14:05:37.828727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 14:05:37.828991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 14:05:37.829025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 14:05:37.829234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 14:05:37.829267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 14:05:37.829511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 14:05:37.829578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 
00:26:43.281 [2024-07-15 14:05:37.829823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 14:05:37.829894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 14:05:37.830174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 14:05:37.830207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 14:05:37.830378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 14:05:37.830410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 14:05:37.830567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 14:05:37.830628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 14:05:37.830861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 14:05:37.830931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 14:05:37.831143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 14:05:37.831176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 14:05:37.831335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 14:05:37.831368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 14:05:37.831585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 14:05:37.831643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 14:05:37.831900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 14:05:37.831970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 14:05:37.832244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 14:05:37.832276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 
00:26:43.281 [2024-07-15 14:05:37.832392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 14:05:37.832425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 14:05:37.832619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 14:05:37.832670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 14:05:37.832913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 14:05:37.832982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 14:05:37.833213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 14:05:37.833246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 14:05:37.833397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 14:05:37.833461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 14:05:37.833704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 14:05:37.833766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 14:05:37.834042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 14:05:37.834075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 14:05:37.834302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 14:05:37.834334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 14:05:37.834513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 14:05:37.834581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 14:05:37.834831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 14:05:37.834900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 
00:26:43.281 [2024-07-15 14:05:37.835173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 14:05:37.835205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 14:05:37.835443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 14:05:37.835476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 14:05:37.835667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 14:05:37.835717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 14:05:37.835984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 14:05:37.836052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 14:05:37.836322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 14:05:37.836356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 14:05:37.836559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 14:05:37.836592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 14:05:37.836812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 14:05:37.836883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 14:05:37.837155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 14:05:37.837220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 14:05:37.837434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 14:05:37.837466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 14:05:37.837696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 14:05:37.837729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 
00:26:43.281 [2024-07-15 14:05:37.837984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 14:05:37.838041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 14:05:37.838265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 14:05:37.838333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 14:05:37.838560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 14:05:37.838593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 14:05:37.838783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 14:05:37.838816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.281 [2024-07-15 14:05:37.839062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.281 [2024-07-15 14:05:37.839138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.281 qpair failed and we were unable to recover it. 00:26:43.282 [2024-07-15 14:05:37.839391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 14:05:37.839458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 00:26:43.282 [2024-07-15 14:05:37.839708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 14:05:37.839771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 00:26:43.282 [2024-07-15 14:05:37.839967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 14:05:37.840000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 00:26:43.282 [2024-07-15 14:05:37.840240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 14:05:37.840290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 00:26:43.282 [2024-07-15 14:05:37.840588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 14:05:37.840655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 
00:26:43.282 [2024-07-15 14:05:37.840934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 14:05:37.840968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 00:26:43.282 [2024-07-15 14:05:37.841152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 14:05:37.841191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 00:26:43.282 [2024-07-15 14:05:37.841423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 14:05:37.841493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 00:26:43.282 [2024-07-15 14:05:37.841815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 14:05:37.841887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 00:26:43.282 [2024-07-15 14:05:37.842145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 14:05:37.842178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 00:26:43.282 [2024-07-15 14:05:37.842358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 14:05:37.842391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 00:26:43.282 [2024-07-15 14:05:37.842619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 14:05:37.842691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 00:26:43.282 [2024-07-15 14:05:37.842979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 14:05:37.843047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 00:26:43.282 [2024-07-15 14:05:37.843257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 14:05:37.843290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 00:26:43.282 [2024-07-15 14:05:37.843504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 14:05:37.843536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 
00:26:43.282 [2024-07-15 14:05:37.843728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 14:05:37.843792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 00:26:43.282 [2024-07-15 14:05:37.844031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 14:05:37.844102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 00:26:43.282 [2024-07-15 14:05:37.844382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 14:05:37.844415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 00:26:43.282 [2024-07-15 14:05:37.844599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 14:05:37.844631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 00:26:43.282 [2024-07-15 14:05:37.844931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 14:05:37.844999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 00:26:43.282 [2024-07-15 14:05:37.845348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 14:05:37.845381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 00:26:43.282 [2024-07-15 14:05:37.845624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 14:05:37.845695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 00:26:43.282 [2024-07-15 14:05:37.846012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 14:05:37.846067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 00:26:43.282 [2024-07-15 14:05:37.846285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 14:05:37.846318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 00:26:43.282 [2024-07-15 14:05:37.846492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 14:05:37.846525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 
00:26:43.282 [2024-07-15 14:05:37.846705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 14:05:37.846784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 00:26:43.282 [2024-07-15 14:05:37.847049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 14:05:37.847116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 00:26:43.282 [2024-07-15 14:05:37.847384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 14:05:37.847417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 00:26:43.282 [2024-07-15 14:05:37.847646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 14:05:37.847714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 00:26:43.282 [2024-07-15 14:05:37.848006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 14:05:37.848058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 00:26:43.282 [2024-07-15 14:05:37.848247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 14:05:37.848279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 00:26:43.282 [2024-07-15 14:05:37.848479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 14:05:37.848512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 00:26:43.282 [2024-07-15 14:05:37.848731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 14:05:37.848810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 00:26:43.282 [2024-07-15 14:05:37.849098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.282 [2024-07-15 14:05:37.849131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.282 qpair failed and we were unable to recover it. 00:26:43.283 [2024-07-15 14:05:37.849324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.283 [2024-07-15 14:05:37.849357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.283 qpair failed and we were unable to recover it. 
00:26:43.283 [2024-07-15 14:05:37.849609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.283 [2024-07-15 14:05:37.849676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.283 qpair failed and we were unable to recover it. 00:26:43.283 [2024-07-15 14:05:37.849909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.283 [2024-07-15 14:05:37.849961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.283 qpair failed and we were unable to recover it. 00:26:43.283 [2024-07-15 14:05:37.850155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.283 [2024-07-15 14:05:37.850222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.283 qpair failed and we were unable to recover it. 00:26:43.283 [2024-07-15 14:05:37.850483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.283 [2024-07-15 14:05:37.850516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.283 qpair failed and we were unable to recover it. 00:26:43.283 [2024-07-15 14:05:37.850691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.283 [2024-07-15 14:05:37.850755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.283 qpair failed and we were unable to recover it. 00:26:43.283 [2024-07-15 14:05:37.851071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.283 [2024-07-15 14:05:37.851147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.283 qpair failed and we were unable to recover it. 00:26:43.283 [2024-07-15 14:05:37.851385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.283 [2024-07-15 14:05:37.851417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.283 qpair failed and we were unable to recover it. 00:26:43.283 [2024-07-15 14:05:37.851603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.283 [2024-07-15 14:05:37.851636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.283 qpair failed and we were unable to recover it. 00:26:43.283 [2024-07-15 14:05:37.851870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.283 [2024-07-15 14:05:37.851904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.283 qpair failed and we were unable to recover it. 00:26:43.283 [2024-07-15 14:05:37.852120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.283 [2024-07-15 14:05:37.852153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.283 qpair failed and we were unable to recover it. 
00:26:43.283 [2024-07-15 14:05:37.852302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.283 [2024-07-15 14:05:37.852368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.283 qpair failed and we were unable to recover it. 00:26:43.283 [2024-07-15 14:05:37.852616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.283 [2024-07-15 14:05:37.852675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.283 qpair failed and we were unable to recover it. 00:26:43.283 [2024-07-15 14:05:37.852857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.283 [2024-07-15 14:05:37.852890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.283 qpair failed and we were unable to recover it. 00:26:43.283 [2024-07-15 14:05:37.853039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.283 [2024-07-15 14:05:37.853072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.283 qpair failed and we were unable to recover it. 00:26:43.283 [2024-07-15 14:05:37.853243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.283 [2024-07-15 14:05:37.853275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.283 qpair failed and we were unable to recover it. 00:26:43.283 [2024-07-15 14:05:37.853505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.283 [2024-07-15 14:05:37.853556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.283 qpair failed and we were unable to recover it. 00:26:43.283 [2024-07-15 14:05:37.853837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.283 [2024-07-15 14:05:37.853871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.283 qpair failed and we were unable to recover it. 00:26:43.283 [2024-07-15 14:05:37.854077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.283 [2024-07-15 14:05:37.854127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.283 qpair failed and we were unable to recover it. 00:26:43.283 [2024-07-15 14:05:37.854432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.283 [2024-07-15 14:05:37.854501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.283 qpair failed and we were unable to recover it. 00:26:43.283 [2024-07-15 14:05:37.854784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.283 [2024-07-15 14:05:37.854835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.283 qpair failed and we were unable to recover it. 
00:26:43.283 [2024-07-15 14:05:37.855052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.283 [2024-07-15 14:05:37.855084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.283 qpair failed and we were unable to recover it. 00:26:43.283 [2024-07-15 14:05:37.855394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.283 [2024-07-15 14:05:37.855426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.283 qpair failed and we were unable to recover it. 00:26:43.283 [2024-07-15 14:05:37.855618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.283 [2024-07-15 14:05:37.855668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.283 qpair failed and we were unable to recover it. 00:26:43.283 [2024-07-15 14:05:37.855930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.283 [2024-07-15 14:05:37.855964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.283 qpair failed and we were unable to recover it. 00:26:43.283 [2024-07-15 14:05:37.856187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.283 [2024-07-15 14:05:37.856255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.283 qpair failed and we were unable to recover it. 00:26:43.283 [2024-07-15 14:05:37.856447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.283 [2024-07-15 14:05:37.856516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.283 qpair failed and we were unable to recover it. 00:26:43.283 [2024-07-15 14:05:37.856796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.283 [2024-07-15 14:05:37.856830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.283 qpair failed and we were unable to recover it. 00:26:43.283 [2024-07-15 14:05:37.856987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.283 [2024-07-15 14:05:37.857021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.283 qpair failed and we were unable to recover it. 00:26:43.283 [2024-07-15 14:05:37.857266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.283 [2024-07-15 14:05:37.857336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.283 qpair failed and we were unable to recover it. 00:26:43.283 [2024-07-15 14:05:37.857559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.283 [2024-07-15 14:05:37.857626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.283 qpair failed and we were unable to recover it. 
00:26:43.283 [2024-07-15 14:05:37.857896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.283 [2024-07-15 14:05:37.857930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.283 qpair failed and we were unable to recover it. 00:26:43.283 [2024-07-15 14:05:37.858132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.283 [2024-07-15 14:05:37.858165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.283 qpair failed and we were unable to recover it. 00:26:43.283 [2024-07-15 14:05:37.858413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.283 [2024-07-15 14:05:37.858491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.283 qpair failed and we were unable to recover it. 00:26:43.283 [2024-07-15 14:05:37.858734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.283 [2024-07-15 14:05:37.858775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.283 qpair failed and we were unable to recover it. 00:26:43.283 [2024-07-15 14:05:37.858981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.283 [2024-07-15 14:05:37.859054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.283 qpair failed and we were unable to recover it. 00:26:43.283 [2024-07-15 14:05:37.859277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.283 [2024-07-15 14:05:37.859309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.283 qpair failed and we were unable to recover it. 00:26:43.283 [2024-07-15 14:05:37.859511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.283 [2024-07-15 14:05:37.859580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.283 qpair failed and we were unable to recover it. 00:26:43.283 [2024-07-15 14:05:37.859797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.283 [2024-07-15 14:05:37.859830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.283 qpair failed and we were unable to recover it. 00:26:43.283 [2024-07-15 14:05:37.860018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.283 [2024-07-15 14:05:37.860050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.283 qpair failed and we were unable to recover it. 00:26:43.283 [2024-07-15 14:05:37.860304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.283 [2024-07-15 14:05:37.860354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.283 qpair failed and we were unable to recover it. 
00:26:43.283 [2024-07-15 14:05:37.860622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.284 [2024-07-15 14:05:37.860654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.284 qpair failed and we were unable to recover it. 00:26:43.284 [2024-07-15 14:05:37.860801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.284 [2024-07-15 14:05:37.860836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.284 qpair failed and we were unable to recover it. 00:26:43.284 [2024-07-15 14:05:37.861034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.284 [2024-07-15 14:05:37.861106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.284 qpair failed and we were unable to recover it. 00:26:43.284 [2024-07-15 14:05:37.861425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.284 [2024-07-15 14:05:37.861458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.284 qpair failed and we were unable to recover it. 00:26:43.284 [2024-07-15 14:05:37.861649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.284 [2024-07-15 14:05:37.861699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.284 qpair failed and we were unable to recover it. 00:26:43.284 [2024-07-15 14:05:37.861955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.284 [2024-07-15 14:05:37.862013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.284 qpair failed and we were unable to recover it. 00:26:43.284 [2024-07-15 14:05:37.862275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.284 [2024-07-15 14:05:37.862308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.284 qpair failed and we were unable to recover it. 00:26:43.284 [2024-07-15 14:05:37.862568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.284 [2024-07-15 14:05:37.862636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.284 qpair failed and we were unable to recover it. 00:26:43.284 [2024-07-15 14:05:37.862906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.284 [2024-07-15 14:05:37.862974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.284 qpair failed and we were unable to recover it. 00:26:43.284 [2024-07-15 14:05:37.863262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.284 [2024-07-15 14:05:37.863294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.284 qpair failed and we were unable to recover it. 
00:26:43.284 [2024-07-15 14:05:37.863452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.284 [2024-07-15 14:05:37.863485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.284 qpair failed and we were unable to recover it. 00:26:43.284 [2024-07-15 14:05:37.863716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.284 [2024-07-15 14:05:37.863758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.284 qpair failed and we were unable to recover it. 00:26:43.284 [2024-07-15 14:05:37.863985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.284 [2024-07-15 14:05:37.864052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.284 qpair failed and we were unable to recover it. 00:26:43.284 [2024-07-15 14:05:37.864268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.284 [2024-07-15 14:05:37.864301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.284 qpair failed and we were unable to recover it. 00:26:43.284 [2024-07-15 14:05:37.864573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.284 [2024-07-15 14:05:37.864606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.284 qpair failed and we were unable to recover it. 00:26:43.284 [2024-07-15 14:05:37.864867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.284 [2024-07-15 14:05:37.864919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.284 qpair failed and we were unable to recover it. 00:26:43.284 [2024-07-15 14:05:37.865195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.284 [2024-07-15 14:05:37.865228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.284 qpair failed and we were unable to recover it. 00:26:43.284 [2024-07-15 14:05:37.865461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.284 [2024-07-15 14:05:37.865493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.284 qpair failed and we were unable to recover it. 00:26:43.284 [2024-07-15 14:05:37.865715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.284 [2024-07-15 14:05:37.865779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.284 qpair failed and we were unable to recover it. 00:26:43.284 [2024-07-15 14:05:37.866047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.284 [2024-07-15 14:05:37.866122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.284 qpair failed and we were unable to recover it. 
00:26:43.284 [2024-07-15 14:05:37.866330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.284 [2024-07-15 14:05:37.866362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.284 qpair failed and we were unable to recover it. 00:26:43.284 [2024-07-15 14:05:37.866608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.284 [2024-07-15 14:05:37.866640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.284 qpair failed and we were unable to recover it. 00:26:43.284 [2024-07-15 14:05:37.866958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.284 [2024-07-15 14:05:37.867017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.284 qpair failed and we were unable to recover it. 00:26:43.284 [2024-07-15 14:05:37.867280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.284 [2024-07-15 14:05:37.867313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.284 qpair failed and we were unable to recover it. 00:26:43.284 [2024-07-15 14:05:37.867523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.284 [2024-07-15 14:05:37.867556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.284 qpair failed and we were unable to recover it. 00:26:43.284 [2024-07-15 14:05:37.867762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.284 [2024-07-15 14:05:37.867813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.284 qpair failed and we were unable to recover it. 00:26:43.284 [2024-07-15 14:05:37.868074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.284 [2024-07-15 14:05:37.868124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.284 qpair failed and we were unable to recover it. 00:26:43.284 [2024-07-15 14:05:37.868360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.284 [2024-07-15 14:05:37.868411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.284 qpair failed and we were unable to recover it. 00:26:43.284 [2024-07-15 14:05:37.868590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.284 [2024-07-15 14:05:37.868623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.284 qpair failed and we were unable to recover it. 00:26:43.284 [2024-07-15 14:05:37.868838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.284 [2024-07-15 14:05:37.868872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.284 qpair failed and we were unable to recover it. 
00:26:43.284 [2024-07-15 14:05:37.869106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.284 [2024-07-15 14:05:37.869173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420
00:26:43.284 qpair failed and we were unable to recover it.
00:26:43.284 [... the same "connect() failed, errno = 111" / "sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420" pair, each followed by "qpair failed and we were unable to recover it.", repeats continuously from 14:05:37.869 through 14:05:37.939 ...]
00:26:43.290 [2024-07-15 14:05:37.939540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.290 [2024-07-15 14:05:37.939608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420
00:26:43.290 qpair failed and we were unable to recover it.
00:26:43.290 [2024-07-15 14:05:37.939903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.290 [2024-07-15 14:05:37.939955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.290 qpair failed and we were unable to recover it. 00:26:43.290 [2024-07-15 14:05:37.940231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.290 [2024-07-15 14:05:37.940298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.290 qpair failed and we were unable to recover it. 00:26:43.290 [2024-07-15 14:05:37.940568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.290 [2024-07-15 14:05:37.940636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.290 qpair failed and we were unable to recover it. 00:26:43.290 [2024-07-15 14:05:37.940940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.290 [2024-07-15 14:05:37.940990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.290 qpair failed and we were unable to recover it. 00:26:43.290 [2024-07-15 14:05:37.941316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.290 [2024-07-15 14:05:37.941384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.290 qpair failed and we were unable to recover it. 00:26:43.290 [2024-07-15 14:05:37.941678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.290 [2024-07-15 14:05:37.941727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.290 qpair failed and we were unable to recover it. 00:26:43.290 [2024-07-15 14:05:37.942037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.290 [2024-07-15 14:05:37.942087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.290 qpair failed and we were unable to recover it. 00:26:43.290 [2024-07-15 14:05:37.942393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.290 [2024-07-15 14:05:37.942460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.290 qpair failed and we were unable to recover it. 00:26:43.290 [2024-07-15 14:05:37.942767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.290 [2024-07-15 14:05:37.942820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.290 qpair failed and we were unable to recover it. 00:26:43.290 [2024-07-15 14:05:37.943136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.290 [2024-07-15 14:05:37.943205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.290 qpair failed and we were unable to recover it. 
00:26:43.290 [2024-07-15 14:05:37.943516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.290 [2024-07-15 14:05:37.943584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.290 qpair failed and we were unable to recover it. 00:26:43.290 [2024-07-15 14:05:37.943841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.290 [2024-07-15 14:05:37.943894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.290 qpair failed and we were unable to recover it. 00:26:43.290 [2024-07-15 14:05:37.944193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.290 [2024-07-15 14:05:37.944261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.290 qpair failed and we were unable to recover it. 00:26:43.290 [2024-07-15 14:05:37.944605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.290 [2024-07-15 14:05:37.944681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.290 qpair failed and we were unable to recover it. 00:26:43.290 [2024-07-15 14:05:37.945004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.290 [2024-07-15 14:05:37.945056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.290 qpair failed and we were unable to recover it. 00:26:43.290 [2024-07-15 14:05:37.945378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.290 [2024-07-15 14:05:37.945446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.290 qpair failed and we were unable to recover it. 00:26:43.290 [2024-07-15 14:05:37.945747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.290 [2024-07-15 14:05:37.945798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.290 qpair failed and we were unable to recover it. 00:26:43.290 [2024-07-15 14:05:37.946108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.290 [2024-07-15 14:05:37.946158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.290 qpair failed and we were unable to recover it. 00:26:43.290 [2024-07-15 14:05:37.946414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.290 [2024-07-15 14:05:37.946483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.290 qpair failed and we were unable to recover it. 00:26:43.290 [2024-07-15 14:05:37.946777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.290 [2024-07-15 14:05:37.946829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.290 qpair failed and we were unable to recover it. 
00:26:43.290 [2024-07-15 14:05:37.947081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.290 [2024-07-15 14:05:37.947131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.290 qpair failed and we were unable to recover it. 00:26:43.290 [2024-07-15 14:05:37.947434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.290 [2024-07-15 14:05:37.947502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.290 qpair failed and we were unable to recover it. 00:26:43.290 [2024-07-15 14:05:37.947806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.290 [2024-07-15 14:05:37.947857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.290 qpair failed and we were unable to recover it. 00:26:43.290 [2024-07-15 14:05:37.948163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.290 [2024-07-15 14:05:37.948231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.290 qpair failed and we were unable to recover it. 00:26:43.290 [2024-07-15 14:05:37.948502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.290 [2024-07-15 14:05:37.948570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.290 qpair failed and we were unable to recover it. 00:26:43.290 [2024-07-15 14:05:37.948867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.290 [2024-07-15 14:05:37.948917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.290 qpair failed and we were unable to recover it. 00:26:43.290 [2024-07-15 14:05:37.949233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.290 [2024-07-15 14:05:37.949302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.290 qpair failed and we were unable to recover it. 00:26:43.290 [2024-07-15 14:05:37.949610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.290 [2024-07-15 14:05:37.949677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.290 qpair failed and we were unable to recover it. 00:26:43.290 [2024-07-15 14:05:37.949998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.290 [2024-07-15 14:05:37.950049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.290 qpair failed and we were unable to recover it. 00:26:43.290 [2024-07-15 14:05:37.950365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.290 [2024-07-15 14:05:37.950433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.290 qpair failed and we were unable to recover it. 
00:26:43.290 [2024-07-15 14:05:37.950726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.290 [2024-07-15 14:05:37.950790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.290 qpair failed and we were unable to recover it. 00:26:43.290 [2024-07-15 14:05:37.951083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.290 [2024-07-15 14:05:37.951134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.290 qpair failed and we were unable to recover it. 00:26:43.290 [2024-07-15 14:05:37.951392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.290 [2024-07-15 14:05:37.951461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.290 qpair failed and we were unable to recover it. 00:26:43.290 [2024-07-15 14:05:37.951785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.290 [2024-07-15 14:05:37.951837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.290 qpair failed and we were unable to recover it. 00:26:43.290 [2024-07-15 14:05:37.952134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.290 [2024-07-15 14:05:37.952183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.290 qpair failed and we were unable to recover it. 00:26:43.290 [2024-07-15 14:05:37.952432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.290 [2024-07-15 14:05:37.952499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.290 qpair failed and we were unable to recover it. 00:26:43.290 [2024-07-15 14:05:37.952802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.290 [2024-07-15 14:05:37.952854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.290 qpair failed and we were unable to recover it. 00:26:43.290 [2024-07-15 14:05:37.953184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.290 [2024-07-15 14:05:37.953235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.290 qpair failed and we were unable to recover it. 00:26:43.290 [2024-07-15 14:05:37.953544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.290 [2024-07-15 14:05:37.953611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.290 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 14:05:37.953918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 14:05:37.953971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 
00:26:43.291 [2024-07-15 14:05:37.954284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 14:05:37.954352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 14:05:37.954667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 14:05:37.954735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 14:05:37.955040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 14:05:37.955090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 14:05:37.955402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 14:05:37.955470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 14:05:37.955812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 14:05:37.955864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 14:05:37.956155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 14:05:37.956224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 14:05:37.956525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 14:05:37.956595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 14:05:37.956887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 14:05:37.956938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 14:05:37.957250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 14:05:37.957318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 14:05:37.957622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 14:05:37.957691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 
00:26:43.291 [2024-07-15 14:05:37.958003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 14:05:37.958054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 14:05:37.958330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 14:05:37.958399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 14:05:37.958698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 14:05:37.958759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 14:05:37.959059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 14:05:37.959110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 14:05:37.959414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 14:05:37.959482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 14:05:37.959787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 14:05:37.959839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 14:05:37.960053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 14:05:37.960121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 14:05:37.960416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 14:05:37.960484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 14:05:37.960756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 14:05:37.960808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 14:05:37.961120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 14:05:37.961169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 
00:26:43.291 [2024-07-15 14:05:37.961474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 14:05:37.961541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 14:05:37.961812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 14:05:37.961864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 14:05:37.962176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 14:05:37.962245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 14:05:37.962549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 14:05:37.962616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 14:05:37.962921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 14:05:37.962980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 14:05:37.963292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 14:05:37.963361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 14:05:37.963661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 14:05:37.963729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 14:05:37.964036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 14:05:37.964086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 14:05:37.964299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 14:05:37.964367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 14:05:37.964622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 14:05:37.964672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 
00:26:43.291 [2024-07-15 14:05:37.964970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 14:05:37.965020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 14:05:37.965331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 14:05:37.965400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 14:05:37.965700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 14:05:37.965765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 14:05:37.966074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 14:05:37.966124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.291 [2024-07-15 14:05:37.966394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.291 [2024-07-15 14:05:37.966462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.291 qpair failed and we were unable to recover it. 00:26:43.292 [2024-07-15 14:05:37.966761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 14:05:37.966813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 00:26:43.292 [2024-07-15 14:05:37.967118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 14:05:37.967168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 00:26:43.292 [2024-07-15 14:05:37.967462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 14:05:37.967530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 00:26:43.292 [2024-07-15 14:05:37.967839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 14:05:37.967891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 00:26:43.292 [2024-07-15 14:05:37.968148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 14:05:37.968216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 
00:26:43.292 [2024-07-15 14:05:37.968537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 14:05:37.968604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 00:26:43.292 [2024-07-15 14:05:37.968927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 14:05:37.968978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 00:26:43.292 [2024-07-15 14:05:37.969249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 14:05:37.969317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 00:26:43.292 [2024-07-15 14:05:37.969612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 14:05:37.969680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 00:26:43.292 [2024-07-15 14:05:37.969990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 14:05:37.970041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 00:26:43.292 [2024-07-15 14:05:37.970324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 14:05:37.970392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 00:26:43.292 [2024-07-15 14:05:37.970684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 14:05:37.970734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 00:26:43.292 [2024-07-15 14:05:37.971042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 14:05:37.971092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 00:26:43.292 [2024-07-15 14:05:37.971404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 14:05:37.971473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 00:26:43.292 [2024-07-15 14:05:37.971727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 14:05:37.971793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 
00:26:43.292 [2024-07-15 14:05:37.972077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 14:05:37.972127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 00:26:43.292 [2024-07-15 14:05:37.972441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 14:05:37.972516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 00:26:43.292 [2024-07-15 14:05:37.972826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 14:05:37.972877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 00:26:43.292 [2024-07-15 14:05:37.973188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 14:05:37.973256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 00:26:43.292 [2024-07-15 14:05:37.973564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 14:05:37.973630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 00:26:43.292 [2024-07-15 14:05:37.973933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 14:05:37.973985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 00:26:43.292 [2024-07-15 14:05:37.974292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 14:05:37.974360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 00:26:43.292 [2024-07-15 14:05:37.974673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 14:05:37.974749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 00:26:43.292 [2024-07-15 14:05:37.974974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 14:05:37.975024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 00:26:43.292 [2024-07-15 14:05:37.975282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 14:05:37.975351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 
00:26:43.292 [2024-07-15 14:05:37.975663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 14:05:37.975729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 00:26:43.292 [2024-07-15 14:05:37.976064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 14:05:37.976114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 00:26:43.292 [2024-07-15 14:05:37.976412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 14:05:37.976480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 00:26:43.292 [2024-07-15 14:05:37.976729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 14:05:37.976795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 00:26:43.292 [2024-07-15 14:05:37.977045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 14:05:37.977096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 00:26:43.292 [2024-07-15 14:05:37.977403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 14:05:37.977471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 00:26:43.292 [2024-07-15 14:05:37.977770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 14:05:37.977821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 00:26:43.292 [2024-07-15 14:05:37.978041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 14:05:37.978091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 00:26:43.292 [2024-07-15 14:05:37.978392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 14:05:37.978461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 00:26:43.292 [2024-07-15 14:05:37.978759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 14:05:37.978810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 
00:26:43.292 [2024-07-15 14:05:37.979095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 14:05:37.979146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 00:26:43.292 [2024-07-15 14:05:37.979462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 14:05:37.979530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 00:26:43.292 [2024-07-15 14:05:37.979829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 14:05:37.979879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 00:26:43.292 [2024-07-15 14:05:37.980175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 14:05:37.980225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 00:26:43.292 [2024-07-15 14:05:37.980498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 14:05:37.980566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 00:26:43.292 [2024-07-15 14:05:37.980830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.292 [2024-07-15 14:05:37.980881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.292 qpair failed and we were unable to recover it. 00:26:43.293 [2024-07-15 14:05:37.981175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.293 [2024-07-15 14:05:37.981243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.293 qpair failed and we were unable to recover it. 00:26:43.293 [2024-07-15 14:05:37.981566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.293 [2024-07-15 14:05:37.981634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.293 qpair failed and we were unable to recover it. 00:26:43.293 [2024-07-15 14:05:37.981919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.293 [2024-07-15 14:05:37.981978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.293 qpair failed and we were unable to recover it. 00:26:43.293 [2024-07-15 14:05:37.982290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.293 [2024-07-15 14:05:37.982361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.293 qpair failed and we were unable to recover it. 
00:26:43.293 [2024-07-15 14:05:37.982666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.293 [2024-07-15 14:05:37.982734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.293 qpair failed and we were unable to recover it. 00:26:43.293 [2024-07-15 14:05:37.983004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.293 [2024-07-15 14:05:37.983054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.293 qpair failed and we were unable to recover it. 00:26:43.293 [2024-07-15 14:05:37.983365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.293 [2024-07-15 14:05:37.983433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.293 qpair failed and we were unable to recover it. 00:26:43.293 [2024-07-15 14:05:37.983734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.293 [2024-07-15 14:05:37.983803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.293 qpair failed and we were unable to recover it. 00:26:43.293 [2024-07-15 14:05:37.984100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.293 [2024-07-15 14:05:37.984149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.293 qpair failed and we were unable to recover it. 00:26:43.293 [2024-07-15 14:05:37.984458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.293 [2024-07-15 14:05:37.984525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.293 qpair failed and we were unable to recover it. 00:26:43.293 [2024-07-15 14:05:37.984822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.293 [2024-07-15 14:05:37.984873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.293 qpair failed and we were unable to recover it. 00:26:43.293 [2024-07-15 14:05:37.985180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.293 [2024-07-15 14:05:37.985248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.293 qpair failed and we were unable to recover it. 00:26:43.293 [2024-07-15 14:05:37.985554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.293 [2024-07-15 14:05:37.985622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.293 qpair failed and we were unable to recover it. 00:26:43.293 [2024-07-15 14:05:37.985884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.293 [2024-07-15 14:05:37.985936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.293 qpair failed and we were unable to recover it. 
00:26:43.293 [2024-07-15 14:05:37.986154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.293 [2024-07-15 14:05:37.986222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.293 qpair failed and we were unable to recover it. 00:26:43.293 [2024-07-15 14:05:37.986527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.293 [2024-07-15 14:05:37.986595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.293 qpair failed and we were unable to recover it. 00:26:43.293 [2024-07-15 14:05:37.986909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.293 [2024-07-15 14:05:37.986961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.293 qpair failed and we were unable to recover it. 00:26:43.293 [2024-07-15 14:05:37.987271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.293 [2024-07-15 14:05:37.987339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.293 qpair failed and we were unable to recover it. 00:26:43.293 [2024-07-15 14:05:37.987638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.293 [2024-07-15 14:05:37.987705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.293 qpair failed and we were unable to recover it. 00:26:43.293 [2024-07-15 14:05:37.988014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.293 [2024-07-15 14:05:37.988064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.293 qpair failed and we were unable to recover it. 00:26:43.293 [2024-07-15 14:05:37.988372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.293 [2024-07-15 14:05:37.988439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.293 qpair failed and we were unable to recover it. 00:26:43.293 [2024-07-15 14:05:37.988701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.293 [2024-07-15 14:05:37.988763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.293 qpair failed and we were unable to recover it. 00:26:43.293 [2024-07-15 14:05:37.989071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.293 [2024-07-15 14:05:37.989121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.293 qpair failed and we were unable to recover it. 00:26:43.293 [2024-07-15 14:05:37.989377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.293 [2024-07-15 14:05:37.989445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.293 qpair failed and we were unable to recover it. 
00:26:43.293 [2024-07-15 14:05:37.989765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.293 [2024-07-15 14:05:37.989817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.293 qpair failed and we were unable to recover it. 00:26:43.293 [2024-07-15 14:05:37.990079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.293 [2024-07-15 14:05:37.990129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.293 qpair failed and we were unable to recover it. 00:26:43.293 [2024-07-15 14:05:37.990384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.293 [2024-07-15 14:05:37.990454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.293 qpair failed and we were unable to recover it. 00:26:43.293 [2024-07-15 14:05:37.990754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.293 [2024-07-15 14:05:37.990805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.293 qpair failed and we were unable to recover it. 00:26:43.293 [2024-07-15 14:05:37.991109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.293 [2024-07-15 14:05:37.991159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.293 qpair failed and we were unable to recover it. 00:26:43.293 [2024-07-15 14:05:37.991465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.293 [2024-07-15 14:05:37.991532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.293 qpair failed and we were unable to recover it. 00:26:43.293 [2024-07-15 14:05:37.991838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.293 [2024-07-15 14:05:37.991888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.293 qpair failed and we were unable to recover it. 00:26:43.293 [2024-07-15 14:05:37.992156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.293 [2024-07-15 14:05:37.992224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.293 qpair failed and we were unable to recover it. 00:26:43.293 [2024-07-15 14:05:37.992502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.293 [2024-07-15 14:05:37.992569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.293 qpair failed and we were unable to recover it. 00:26:43.293 [2024-07-15 14:05:37.992863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.293 [2024-07-15 14:05:37.992915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.293 qpair failed and we were unable to recover it. 
00:26:43.293 [2024-07-15 14:05:37.993220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.293 [2024-07-15 14:05:37.993289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.293 qpair failed and we were unable to recover it. 00:26:43.293 [2024-07-15 14:05:37.993598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.293 [2024-07-15 14:05:37.993666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.293 qpair failed and we were unable to recover it. 00:26:43.293 [2024-07-15 14:05:37.993982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.294 [2024-07-15 14:05:37.994033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.294 qpair failed and we were unable to recover it. 00:26:43.294 [2024-07-15 14:05:37.994288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.294 [2024-07-15 14:05:37.994355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.294 qpair failed and we were unable to recover it. 00:26:43.294 [2024-07-15 14:05:37.994653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.294 [2024-07-15 14:05:37.994722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.294 qpair failed and we were unable to recover it. 00:26:43.294 [2024-07-15 14:05:37.995051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.294 [2024-07-15 14:05:37.995124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.294 qpair failed and we were unable to recover it. 00:26:43.294 [2024-07-15 14:05:37.995434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.294 [2024-07-15 14:05:37.995501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.294 qpair failed and we were unable to recover it. 00:26:43.294 [2024-07-15 14:05:37.995718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.294 [2024-07-15 14:05:37.995791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.294 qpair failed and we were unable to recover it. 00:26:43.294 [2024-07-15 14:05:37.996094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.294 [2024-07-15 14:05:37.996144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.294 qpair failed and we were unable to recover it. 00:26:43.294 [2024-07-15 14:05:37.996444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.294 [2024-07-15 14:05:37.996519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.294 qpair failed and we were unable to recover it. 
00:26:43.294 [2024-07-15 14:05:37.996758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.294 [2024-07-15 14:05:37.996809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.294 qpair failed and we were unable to recover it. 00:26:43.294 [2024-07-15 14:05:37.997108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.294 [2024-07-15 14:05:37.997157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.294 qpair failed and we were unable to recover it. 00:26:43.294 [2024-07-15 14:05:37.997414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.294 [2024-07-15 14:05:37.997483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.294 qpair failed and we were unable to recover it. 00:26:43.294 [2024-07-15 14:05:37.997788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.294 [2024-07-15 14:05:37.997840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.294 qpair failed and we were unable to recover it. 00:26:43.294 [2024-07-15 14:05:37.998132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.294 [2024-07-15 14:05:37.998181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.294 qpair failed and we were unable to recover it. 00:26:43.294 [2024-07-15 14:05:37.998444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.294 [2024-07-15 14:05:37.998513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.294 qpair failed and we were unable to recover it. 00:26:43.294 [2024-07-15 14:05:37.998766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.294 [2024-07-15 14:05:37.998817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.294 qpair failed and we were unable to recover it. 00:26:43.294 [2024-07-15 14:05:37.999118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.294 [2024-07-15 14:05:37.999167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.294 qpair failed and we were unable to recover it. 00:26:43.294 [2024-07-15 14:05:37.999481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.294 [2024-07-15 14:05:37.999549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.294 qpair failed and we were unable to recover it. 00:26:43.294 [2024-07-15 14:05:37.999862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.294 [2024-07-15 14:05:37.999913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.294 qpair failed and we were unable to recover it. 
00:26:43.294 [2024-07-15 14:05:38.000224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.294 [2024-07-15 14:05:38.000291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.294 qpair failed and we were unable to recover it. 00:26:43.294 [2024-07-15 14:05:38.000593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.294 [2024-07-15 14:05:38.000660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.294 qpair failed and we were unable to recover it. 00:26:43.294 [2024-07-15 14:05:38.000964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.294 [2024-07-15 14:05:38.001016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.294 qpair failed and we were unable to recover it. 00:26:43.294 [2024-07-15 14:05:38.001332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.294 [2024-07-15 14:05:38.001401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.294 qpair failed and we were unable to recover it. 00:26:43.294 [2024-07-15 14:05:38.001705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.294 [2024-07-15 14:05:38.001768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.294 qpair failed and we were unable to recover it. 00:26:43.294 [2024-07-15 14:05:38.002075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.294 [2024-07-15 14:05:38.002126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.294 qpair failed and we were unable to recover it. 00:26:43.294 [2024-07-15 14:05:38.002429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.294 [2024-07-15 14:05:38.002497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.294 qpair failed and we were unable to recover it. 00:26:43.294 [2024-07-15 14:05:38.002794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.294 [2024-07-15 14:05:38.002846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.294 qpair failed and we were unable to recover it. 00:26:43.294 [2024-07-15 14:05:38.003139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.294 [2024-07-15 14:05:38.003207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.294 qpair failed and we were unable to recover it. 00:26:43.294 [2024-07-15 14:05:38.003502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.294 [2024-07-15 14:05:38.003570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.294 qpair failed and we were unable to recover it. 
00:26:43.294 [2024-07-15 14:05:38.003828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.294 [2024-07-15 14:05:38.003879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.294 qpair failed and we were unable to recover it. 00:26:43.294 [2024-07-15 14:05:38.004151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.294 [2024-07-15 14:05:38.004219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.294 qpair failed and we were unable to recover it. 00:26:43.294 [2024-07-15 14:05:38.004493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.294 [2024-07-15 14:05:38.004559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.294 qpair failed and we were unable to recover it. 00:26:43.294 [2024-07-15 14:05:38.004833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.294 [2024-07-15 14:05:38.004884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.294 qpair failed and we were unable to recover it. 00:26:43.294 [2024-07-15 14:05:38.005190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.294 [2024-07-15 14:05:38.005259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.294 qpair failed and we were unable to recover it. 00:26:43.294 [2024-07-15 14:05:38.005565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.294 [2024-07-15 14:05:38.005633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.294 qpair failed and we were unable to recover it. 00:26:43.294 [2024-07-15 14:05:38.005937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.294 [2024-07-15 14:05:38.006013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.294 qpair failed and we were unable to recover it. 00:26:43.294 [2024-07-15 14:05:38.006317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.294 [2024-07-15 14:05:38.006385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.294 qpair failed and we were unable to recover it. 00:26:43.294 [2024-07-15 14:05:38.006652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.294 [2024-07-15 14:05:38.006702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.294 qpair failed and we were unable to recover it. 00:26:43.294 [2024-07-15 14:05:38.007040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.294 [2024-07-15 14:05:38.007110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.294 qpair failed and we were unable to recover it. 
00:26:43.294 [2024-07-15 14:05:38.007424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.294 [2024-07-15 14:05:38.007492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.294 qpair failed and we were unable to recover it. 00:26:43.294 [2024-07-15 14:05:38.007790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.294 [2024-07-15 14:05:38.007842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.294 qpair failed and we were unable to recover it. 00:26:43.294 [2024-07-15 14:05:38.008153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.294 [2024-07-15 14:05:38.008222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.294 qpair failed and we were unable to recover it. 00:26:43.294 [2024-07-15 14:05:38.008533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 14:05:38.008601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 00:26:43.295 [2024-07-15 14:05:38.008895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 14:05:38.008946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 00:26:43.295 [2024-07-15 14:05:38.009249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 14:05:38.009316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 00:26:43.295 [2024-07-15 14:05:38.009608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 14:05:38.009676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 00:26:43.295 [2024-07-15 14:05:38.009984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 14:05:38.010034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 00:26:43.295 [2024-07-15 14:05:38.010328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 14:05:38.010395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 00:26:43.295 [2024-07-15 14:05:38.010657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 14:05:38.010707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 
00:26:43.295 [2024-07-15 14:05:38.011032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 14:05:38.011083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 00:26:43.295 [2024-07-15 14:05:38.011282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 14:05:38.011350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 00:26:43.295 [2024-07-15 14:05:38.011660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 14:05:38.011730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 00:26:43.295 [2024-07-15 14:05:38.012067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 14:05:38.012138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 00:26:43.295 [2024-07-15 14:05:38.012448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 14:05:38.012516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 00:26:43.295 [2024-07-15 14:05:38.012767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 14:05:38.012819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 00:26:43.295 [2024-07-15 14:05:38.013075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 14:05:38.013126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 00:26:43.295 [2024-07-15 14:05:38.013433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 14:05:38.013499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 00:26:43.295 [2024-07-15 14:05:38.013808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 14:05:38.013859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 00:26:43.295 [2024-07-15 14:05:38.014083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 14:05:38.014134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 
00:26:43.295 [2024-07-15 14:05:38.014434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 14:05:38.014502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 00:26:43.295 [2024-07-15 14:05:38.014753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 14:05:38.014804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 00:26:43.295 [2024-07-15 14:05:38.015037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 14:05:38.015087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 00:26:43.295 [2024-07-15 14:05:38.015253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 14:05:38.015332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 00:26:43.295 [2024-07-15 14:05:38.015560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 14:05:38.015629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 00:26:43.295 [2024-07-15 14:05:38.015825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 14:05:38.015877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 00:26:43.295 [2024-07-15 14:05:38.016077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 14:05:38.016145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 00:26:43.295 [2024-07-15 14:05:38.016379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 14:05:38.016447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 00:26:43.295 [2024-07-15 14:05:38.016673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 14:05:38.016722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 00:26:43.295 [2024-07-15 14:05:38.016953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 14:05:38.017022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 
00:26:43.295 [2024-07-15 14:05:38.017272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 14:05:38.017342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 00:26:43.295 [2024-07-15 14:05:38.017589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 14:05:38.017658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 00:26:43.295 [2024-07-15 14:05:38.017925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 14:05:38.017994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 00:26:43.295 [2024-07-15 14:05:38.018227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 14:05:38.018296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 00:26:43.295 [2024-07-15 14:05:38.018519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 14:05:38.018596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 00:26:43.295 [2024-07-15 14:05:38.018838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 14:05:38.018908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 00:26:43.295 [2024-07-15 14:05:38.019115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 14:05:38.019183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 00:26:43.295 [2024-07-15 14:05:38.019460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 14:05:38.019528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 00:26:43.295 [2024-07-15 14:05:38.019799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.295 [2024-07-15 14:05:38.019851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.295 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 14:05:38.020073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 14:05:38.020142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 
00:26:43.296 [2024-07-15 14:05:38.020455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 14:05:38.020521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 14:05:38.020814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 14:05:38.020866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 14:05:38.021088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 14:05:38.021158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 14:05:38.021367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 14:05:38.021434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 14:05:38.021678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 14:05:38.021729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 14:05:38.021942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 14:05:38.022018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 14:05:38.022227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 14:05:38.022297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 14:05:38.022509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 14:05:38.022577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 14:05:38.022818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 14:05:38.022889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 14:05:38.023129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 14:05:38.023180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 
00:26:43.296 [2024-07-15 14:05:38.023394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 14:05:38.023444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 14:05:38.023676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 14:05:38.023727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 14:05:38.023924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 14:05:38.023975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 14:05:38.024160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 14:05:38.024210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 14:05:38.024398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 14:05:38.024447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 14:05:38.024619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 14:05:38.024669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 14:05:38.024857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 14:05:38.024909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 14:05:38.025116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 14:05:38.025165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 14:05:38.025465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 14:05:38.025515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 14:05:38.025790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 14:05:38.025841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 
00:26:43.296 [2024-07-15 14:05:38.026091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 14:05:38.026160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 14:05:38.026383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 14:05:38.026453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 14:05:38.026659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 14:05:38.026709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 14:05:38.026931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 14:05:38.027001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 14:05:38.027246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 14:05:38.027314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 14:05:38.027607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 14:05:38.027675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 14:05:38.027928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 14:05:38.027998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 14:05:38.028275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 14:05:38.028344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 14:05:38.028637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 14:05:38.028687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 14:05:38.028901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 14:05:38.028970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 
00:26:43.296 [2024-07-15 14:05:38.029202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 14:05:38.029271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 14:05:38.029594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 14:05:38.029662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 14:05:38.029875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 14:05:38.029946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 14:05:38.030256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 14:05:38.030327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 14:05:38.030649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 14:05:38.030717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 14:05:38.030924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 14:05:38.030991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 14:05:38.031338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 14:05:38.031404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 14:05:38.031651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 14:05:38.031702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 14:05:38.031914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.296 [2024-07-15 14:05:38.031984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.296 qpair failed and we were unable to recover it. 00:26:43.296 [2024-07-15 14:05:38.032214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.297 [2024-07-15 14:05:38.032282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.297 qpair failed and we were unable to recover it. 
00:26:43.297 [2024-07-15 14:05:38.032516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.297 [2024-07-15 14:05:38.032589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.297 qpair failed and we were unable to recover it. 00:26:43.297 [2024-07-15 14:05:38.032843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.297 [2024-07-15 14:05:38.032915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.297 qpair failed and we were unable to recover it. 00:26:43.297 [2024-07-15 14:05:38.033255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.297 [2024-07-15 14:05:38.033332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.297 qpair failed and we were unable to recover it. 00:26:43.297 [2024-07-15 14:05:38.033589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.297 [2024-07-15 14:05:38.033641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.297 qpair failed and we were unable to recover it. 00:26:43.297 [2024-07-15 14:05:38.033906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.297 [2024-07-15 14:05:38.033977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.297 qpair failed and we were unable to recover it. 00:26:43.297 [2024-07-15 14:05:38.034339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.297 [2024-07-15 14:05:38.034407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.297 qpair failed and we were unable to recover it. 00:26:43.297 [2024-07-15 14:05:38.034638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.297 [2024-07-15 14:05:38.034698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.297 qpair failed and we were unable to recover it. 00:26:43.297 [2024-07-15 14:05:38.034928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.297 [2024-07-15 14:05:38.034997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.297 qpair failed and we were unable to recover it. 00:26:43.297 [2024-07-15 14:05:38.035190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.297 [2024-07-15 14:05:38.035259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.297 qpair failed and we were unable to recover it. 00:26:43.297 [2024-07-15 14:05:38.035537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.297 [2024-07-15 14:05:38.035606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.297 qpair failed and we were unable to recover it. 
00:26:43.297 [2024-07-15 14:05:38.035838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.297 [2024-07-15 14:05:38.035908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.297 qpair failed and we were unable to recover it. 00:26:43.297 [2024-07-15 14:05:38.036130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.297 [2024-07-15 14:05:38.036208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.297 qpair failed and we were unable to recover it. 00:26:43.297 [2024-07-15 14:05:38.036417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.297 [2024-07-15 14:05:38.036468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.297 qpair failed and we were unable to recover it. 00:26:43.297 [2024-07-15 14:05:38.036721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.297 [2024-07-15 14:05:38.036790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.297 qpair failed and we were unable to recover it. 00:26:43.297 [2024-07-15 14:05:38.036965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.297 [2024-07-15 14:05:38.037017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.297 qpair failed and we were unable to recover it. 00:26:43.297 [2024-07-15 14:05:38.037359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.297 [2024-07-15 14:05:38.037411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.297 qpair failed and we were unable to recover it. 00:26:43.297 [2024-07-15 14:05:38.037705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.297 [2024-07-15 14:05:38.037771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.297 qpair failed and we were unable to recover it. 00:26:43.297 [2024-07-15 14:05:38.037972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.297 [2024-07-15 14:05:38.038043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.297 qpair failed and we were unable to recover it. 00:26:43.297 [2024-07-15 14:05:38.038358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.297 [2024-07-15 14:05:38.038428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.297 qpair failed and we were unable to recover it. 00:26:43.297 [2024-07-15 14:05:38.038678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.297 [2024-07-15 14:05:38.038730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.297 qpair failed and we were unable to recover it. 
00:26:43.297 [2024-07-15 14:05:38.038963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.297 [2024-07-15 14:05:38.039015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.297 qpair failed and we were unable to recover it. 00:26:43.297 [2024-07-15 14:05:38.039343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.297 [2024-07-15 14:05:38.039416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.297 qpair failed and we were unable to recover it. 00:26:43.297 [2024-07-15 14:05:38.039725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.297 [2024-07-15 14:05:38.039803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.297 qpair failed and we were unable to recover it. 00:26:43.297 [2024-07-15 14:05:38.040042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.297 [2024-07-15 14:05:38.040094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.297 qpair failed and we were unable to recover it. 00:26:43.297 [2024-07-15 14:05:38.040354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.297 [2024-07-15 14:05:38.040423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.297 qpair failed and we were unable to recover it. 00:26:43.297 [2024-07-15 14:05:38.040698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.297 [2024-07-15 14:05:38.040765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.297 qpair failed and we were unable to recover it. 00:26:43.297 [2024-07-15 14:05:38.040935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.297 [2024-07-15 14:05:38.040986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.297 qpair failed and we were unable to recover it. 00:26:43.297 [2024-07-15 14:05:38.041186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.297 [2024-07-15 14:05:38.041260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.297 qpair failed and we were unable to recover it. 00:26:43.297 [2024-07-15 14:05:38.041570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.297 [2024-07-15 14:05:38.041643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.297 qpair failed and we were unable to recover it. 00:26:43.297 [2024-07-15 14:05:38.041934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.297 [2024-07-15 14:05:38.041987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.297 qpair failed and we were unable to recover it. 
00:26:43.297 [2024-07-15 14:05:38.042244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.297 [2024-07-15 14:05:38.042316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420
00:26:43.297 qpair failed and we were unable to recover it.
00:26:43.297 [2024-07-15 14:05:38.042589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.297 [2024-07-15 14:05:38.042663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420
00:26:43.297 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously between 14:05:38.042 and 14:05:38.111 ...]
00:26:43.570 [2024-07-15 14:05:38.111011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.570 [2024-07-15 14:05:38.111084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420
00:26:43.570 qpair failed and we were unable to recover it.
00:26:43.570 [2024-07-15 14:05:38.111384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.570 [2024-07-15 14:05:38.111452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.570 qpair failed and we were unable to recover it. 00:26:43.571 [2024-07-15 14:05:38.111768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.571 [2024-07-15 14:05:38.111821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.571 qpair failed and we were unable to recover it. 00:26:43.571 [2024-07-15 14:05:38.112136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.571 [2024-07-15 14:05:38.112188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.571 qpair failed and we were unable to recover it. 00:26:43.571 [2024-07-15 14:05:38.112401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.571 [2024-07-15 14:05:38.112469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.571 qpair failed and we were unable to recover it. 00:26:43.571 [2024-07-15 14:05:38.112773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.571 [2024-07-15 14:05:38.112825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.571 qpair failed and we were unable to recover it. 00:26:43.571 [2024-07-15 14:05:38.113080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.571 [2024-07-15 14:05:38.113152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.571 qpair failed and we were unable to recover it. 00:26:43.571 [2024-07-15 14:05:38.113457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.571 [2024-07-15 14:05:38.113525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.571 qpair failed and we were unable to recover it. 00:26:43.571 [2024-07-15 14:05:38.113824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.571 [2024-07-15 14:05:38.113878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.571 qpair failed and we were unable to recover it. 00:26:43.571 [2024-07-15 14:05:38.114172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.571 [2024-07-15 14:05:38.114240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.571 qpair failed and we were unable to recover it. 00:26:43.571 [2024-07-15 14:05:38.114553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.571 [2024-07-15 14:05:38.114622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.571 qpair failed and we were unable to recover it. 
00:26:43.571 [2024-07-15 14:05:38.114860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.571 [2024-07-15 14:05:38.114913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.571 qpair failed and we were unable to recover it. 00:26:43.571 [2024-07-15 14:05:38.115219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.571 [2024-07-15 14:05:38.115288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.571 qpair failed and we were unable to recover it. 00:26:43.571 [2024-07-15 14:05:38.115545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.571 [2024-07-15 14:05:38.115615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.571 qpair failed and we were unable to recover it. 00:26:43.571 [2024-07-15 14:05:38.115866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.571 [2024-07-15 14:05:38.115919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.571 qpair failed and we were unable to recover it. 00:26:43.571 [2024-07-15 14:05:38.116177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.571 [2024-07-15 14:05:38.116247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.571 qpair failed and we were unable to recover it. 00:26:43.571 [2024-07-15 14:05:38.116530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.571 [2024-07-15 14:05:38.116607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.571 qpair failed and we were unable to recover it. 00:26:43.571 [2024-07-15 14:05:38.116914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.571 [2024-07-15 14:05:38.116985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.571 qpair failed and we were unable to recover it. 00:26:43.571 [2024-07-15 14:05:38.117294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.571 [2024-07-15 14:05:38.117363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.571 qpair failed and we were unable to recover it. 00:26:43.571 [2024-07-15 14:05:38.117662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.571 [2024-07-15 14:05:38.117714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.571 qpair failed and we were unable to recover it. 00:26:43.571 [2024-07-15 14:05:38.118011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.571 [2024-07-15 14:05:38.118082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.571 qpair failed and we were unable to recover it. 
00:26:43.571 [2024-07-15 14:05:38.118353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.571 [2024-07-15 14:05:38.118422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.571 qpair failed and we were unable to recover it. 00:26:43.571 [2024-07-15 14:05:38.118728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.571 [2024-07-15 14:05:38.118794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.571 qpair failed and we were unable to recover it. 00:26:43.571 [2024-07-15 14:05:38.119096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.571 [2024-07-15 14:05:38.119147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.571 qpair failed and we were unable to recover it. 00:26:43.571 [2024-07-15 14:05:38.119460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.571 [2024-07-15 14:05:38.119530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.571 qpair failed and we were unable to recover it. 00:26:43.571 [2024-07-15 14:05:38.119829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.571 [2024-07-15 14:05:38.119881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.571 qpair failed and we were unable to recover it. 00:26:43.571 [2024-07-15 14:05:38.120141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.571 [2024-07-15 14:05:38.120210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.571 qpair failed and we were unable to recover it. 00:26:43.571 [2024-07-15 14:05:38.120488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.571 [2024-07-15 14:05:38.120558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.571 qpair failed and we were unable to recover it. 00:26:43.571 [2024-07-15 14:05:38.120859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.571 [2024-07-15 14:05:38.120911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.571 qpair failed and we were unable to recover it. 00:26:43.571 [2024-07-15 14:05:38.121224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.571 [2024-07-15 14:05:38.121293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.571 qpair failed and we were unable to recover it. 00:26:43.571 [2024-07-15 14:05:38.121603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.571 [2024-07-15 14:05:38.121672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.571 qpair failed and we were unable to recover it. 
00:26:43.571 [2024-07-15 14:05:38.121936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.571 [2024-07-15 14:05:38.121987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.571 qpair failed and we were unable to recover it. 00:26:43.571 [2024-07-15 14:05:38.122301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.571 [2024-07-15 14:05:38.122371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.571 qpair failed and we were unable to recover it. 00:26:43.571 [2024-07-15 14:05:38.122628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.571 [2024-07-15 14:05:38.122697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.571 qpair failed and we were unable to recover it. 00:26:43.571 [2024-07-15 14:05:38.122977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.571 [2024-07-15 14:05:38.123045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.571 qpair failed and we were unable to recover it. 00:26:43.571 [2024-07-15 14:05:38.123318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.571 [2024-07-15 14:05:38.123386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.571 qpair failed and we were unable to recover it. 00:26:43.571 [2024-07-15 14:05:38.123676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.571 [2024-07-15 14:05:38.123728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.571 qpair failed and we were unable to recover it. 00:26:43.571 [2024-07-15 14:05:38.124000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.571 [2024-07-15 14:05:38.124051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.571 qpair failed and we were unable to recover it. 00:26:43.571 [2024-07-15 14:05:38.124308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.571 [2024-07-15 14:05:38.124377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.571 qpair failed and we were unable to recover it. 00:26:43.571 [2024-07-15 14:05:38.124633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.571 [2024-07-15 14:05:38.124694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.571 qpair failed and we were unable to recover it. 00:26:43.571 [2024-07-15 14:05:38.124971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.571 [2024-07-15 14:05:38.125024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.571 qpair failed and we were unable to recover it. 
00:26:43.571 [2024-07-15 14:05:38.125331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.571 [2024-07-15 14:05:38.125400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.571 qpair failed and we were unable to recover it. 00:26:43.571 [2024-07-15 14:05:38.125702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.571 [2024-07-15 14:05:38.125768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.571 qpair failed and we were unable to recover it. 00:26:43.572 [2024-07-15 14:05:38.126064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.572 [2024-07-15 14:05:38.126116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.572 qpair failed and we were unable to recover it. 00:26:43.572 [2024-07-15 14:05:38.126433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.572 [2024-07-15 14:05:38.126502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.572 qpair failed and we were unable to recover it. 00:26:43.572 [2024-07-15 14:05:38.126797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.572 [2024-07-15 14:05:38.126850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.572 qpair failed and we were unable to recover it. 00:26:43.572 [2024-07-15 14:05:38.127087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.572 [2024-07-15 14:05:38.127156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.572 qpair failed and we were unable to recover it. 00:26:43.572 [2024-07-15 14:05:38.127409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.572 [2024-07-15 14:05:38.127478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.572 qpair failed and we were unable to recover it. 00:26:43.572 [2024-07-15 14:05:38.127791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.572 [2024-07-15 14:05:38.127844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.572 qpair failed and we were unable to recover it. 00:26:43.572 [2024-07-15 14:05:38.128161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.572 [2024-07-15 14:05:38.128230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.572 qpair failed and we were unable to recover it. 00:26:43.572 [2024-07-15 14:05:38.128539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.572 [2024-07-15 14:05:38.128608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.572 qpair failed and we were unable to recover it. 
00:26:43.572 [2024-07-15 14:05:38.128875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.572 [2024-07-15 14:05:38.128926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.572 qpair failed and we were unable to recover it. 00:26:43.572 [2024-07-15 14:05:38.129247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.572 [2024-07-15 14:05:38.129316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.572 qpair failed and we were unable to recover it. 00:26:43.572 [2024-07-15 14:05:38.129582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.572 [2024-07-15 14:05:38.129650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.572 qpair failed and we were unable to recover it. 00:26:43.572 [2024-07-15 14:05:38.129948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.572 [2024-07-15 14:05:38.129999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.572 qpair failed and we were unable to recover it. 00:26:43.572 [2024-07-15 14:05:38.130291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.572 [2024-07-15 14:05:38.130360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.572 qpair failed and we were unable to recover it. 00:26:43.572 [2024-07-15 14:05:38.130681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.572 [2024-07-15 14:05:38.130766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.572 qpair failed and we were unable to recover it. 00:26:43.572 [2024-07-15 14:05:38.131056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.572 [2024-07-15 14:05:38.131126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.572 qpair failed and we were unable to recover it. 00:26:43.572 [2024-07-15 14:05:38.131360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.572 [2024-07-15 14:05:38.131428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.572 qpair failed and we were unable to recover it. 00:26:43.572 [2024-07-15 14:05:38.131717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.572 [2024-07-15 14:05:38.131785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.572 qpair failed and we were unable to recover it. 00:26:43.572 [2024-07-15 14:05:38.132080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.572 [2024-07-15 14:05:38.132131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.572 qpair failed and we were unable to recover it. 
00:26:43.572 [2024-07-15 14:05:38.132391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.572 [2024-07-15 14:05:38.132460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.572 qpair failed and we were unable to recover it. 00:26:43.572 [2024-07-15 14:05:38.132765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.572 [2024-07-15 14:05:38.132817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.572 qpair failed and we were unable to recover it. 00:26:43.572 [2024-07-15 14:05:38.133121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.572 [2024-07-15 14:05:38.133172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.572 qpair failed and we were unable to recover it. 00:26:43.572 [2024-07-15 14:05:38.133432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.572 [2024-07-15 14:05:38.133501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.572 qpair failed and we were unable to recover it. 00:26:43.572 [2024-07-15 14:05:38.133809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.572 [2024-07-15 14:05:38.133885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.572 qpair failed and we were unable to recover it. 00:26:43.572 [2024-07-15 14:05:38.134137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.572 [2024-07-15 14:05:38.134207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.572 qpair failed and we were unable to recover it. 00:26:43.572 [2024-07-15 14:05:38.134507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.572 [2024-07-15 14:05:38.134577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.572 qpair failed and we were unable to recover it. 00:26:43.572 [2024-07-15 14:05:38.134873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.572 [2024-07-15 14:05:38.134925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.572 qpair failed and we were unable to recover it. 00:26:43.572 [2024-07-15 14:05:38.135231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.572 [2024-07-15 14:05:38.135299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.572 qpair failed and we were unable to recover it. 00:26:43.572 [2024-07-15 14:05:38.135588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.572 [2024-07-15 14:05:38.135656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.572 qpair failed and we were unable to recover it. 
00:26:43.572 [2024-07-15 14:05:38.135968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.572 [2024-07-15 14:05:38.136020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.572 qpair failed and we were unable to recover it. 00:26:43.572 [2024-07-15 14:05:38.136330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.572 [2024-07-15 14:05:38.136400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.572 qpair failed and we were unable to recover it. 00:26:43.572 [2024-07-15 14:05:38.136715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.572 [2024-07-15 14:05:38.136782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.572 qpair failed and we were unable to recover it. 00:26:43.572 [2024-07-15 14:05:38.137083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.572 [2024-07-15 14:05:38.137135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.572 qpair failed and we were unable to recover it. 00:26:43.572 [2024-07-15 14:05:38.137449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.572 [2024-07-15 14:05:38.137519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.572 qpair failed and we were unable to recover it. 00:26:43.572 [2024-07-15 14:05:38.137811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.572 [2024-07-15 14:05:38.137864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.572 qpair failed and we were unable to recover it. 00:26:43.572 [2024-07-15 14:05:38.138140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.572 [2024-07-15 14:05:38.138208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.572 qpair failed and we were unable to recover it. 00:26:43.572 [2024-07-15 14:05:38.138509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.572 [2024-07-15 14:05:38.138578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.572 qpair failed and we were unable to recover it. 00:26:43.572 [2024-07-15 14:05:38.138826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.572 [2024-07-15 14:05:38.138879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.572 qpair failed and we were unable to recover it. 00:26:43.572 [2024-07-15 14:05:38.139188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.572 [2024-07-15 14:05:38.139256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.572 qpair failed and we were unable to recover it. 
00:26:43.572 [2024-07-15 14:05:38.139571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.572 [2024-07-15 14:05:38.139641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.572 qpair failed and we were unable to recover it. 00:26:43.572 [2024-07-15 14:05:38.139894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.572 [2024-07-15 14:05:38.139947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.572 qpair failed and we were unable to recover it. 00:26:43.572 [2024-07-15 14:05:38.140264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.572 [2024-07-15 14:05:38.140333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.573 qpair failed and we were unable to recover it. 00:26:43.573 [2024-07-15 14:05:38.140642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.573 [2024-07-15 14:05:38.140718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.573 qpair failed and we were unable to recover it. 00:26:43.573 [2024-07-15 14:05:38.140999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.573 [2024-07-15 14:05:38.141069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.573 qpair failed and we were unable to recover it. 00:26:43.573 [2024-07-15 14:05:38.141376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.573 [2024-07-15 14:05:38.141445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.573 qpair failed and we were unable to recover it. 00:26:43.573 [2024-07-15 14:05:38.141733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.573 [2024-07-15 14:05:38.141813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.573 qpair failed and we were unable to recover it. 00:26:43.573 [2024-07-15 14:05:38.142070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.573 [2024-07-15 14:05:38.142120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.573 qpair failed and we were unable to recover it. 00:26:43.573 [2024-07-15 14:05:38.142381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.573 [2024-07-15 14:05:38.142450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.573 qpair failed and we were unable to recover it. 00:26:43.573 [2024-07-15 14:05:38.142662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.573 [2024-07-15 14:05:38.142713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.573 qpair failed and we were unable to recover it. 
00:26:43.573 [2024-07-15 14:05:38.142997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.573 [2024-07-15 14:05:38.143049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.573 qpair failed and we were unable to recover it. 00:26:43.573 [2024-07-15 14:05:38.143359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.573 [2024-07-15 14:05:38.143427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.573 qpair failed and we were unable to recover it. 00:26:43.573 [2024-07-15 14:05:38.143663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.573 [2024-07-15 14:05:38.143715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.573 qpair failed and we were unable to recover it. 00:26:43.573 [2024-07-15 14:05:38.144043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.573 [2024-07-15 14:05:38.144095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.573 qpair failed and we were unable to recover it. 00:26:43.573 [2024-07-15 14:05:38.144368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.573 [2024-07-15 14:05:38.144437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.573 qpair failed and we were unable to recover it. 00:26:43.573 [2024-07-15 14:05:38.144751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.573 [2024-07-15 14:05:38.144805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.573 qpair failed and we were unable to recover it. 00:26:43.573 [2024-07-15 14:05:38.145066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.573 [2024-07-15 14:05:38.145117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.573 qpair failed and we were unable to recover it. 00:26:43.573 [2024-07-15 14:05:38.145435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.573 [2024-07-15 14:05:38.145504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.573 qpair failed and we were unable to recover it. 00:26:43.573 [2024-07-15 14:05:38.145766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.573 [2024-07-15 14:05:38.145819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.573 qpair failed and we were unable to recover it. 00:26:43.573 [2024-07-15 14:05:38.146147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.573 [2024-07-15 14:05:38.146199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.573 qpair failed and we were unable to recover it. 
00:26:43.573 [2024-07-15 14:05:38.146444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.573 [2024-07-15 14:05:38.146513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.573 qpair failed and we were unable to recover it. 00:26:43.573 [2024-07-15 14:05:38.146791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.573 [2024-07-15 14:05:38.146843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.573 qpair failed and we were unable to recover it. 00:26:43.573 [2024-07-15 14:05:38.147110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.573 [2024-07-15 14:05:38.147162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.573 qpair failed and we were unable to recover it. 00:26:43.573 [2024-07-15 14:05:38.147476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.573 [2024-07-15 14:05:38.147544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.573 qpair failed and we were unable to recover it. 00:26:43.573 [2024-07-15 14:05:38.147815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.573 [2024-07-15 14:05:38.147868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.573 qpair failed and we were unable to recover it. 00:26:43.573 [2024-07-15 14:05:38.148134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.573 [2024-07-15 14:05:38.148203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.573 qpair failed and we were unable to recover it. 00:26:43.573 [2024-07-15 14:05:38.148457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.573 [2024-07-15 14:05:38.148525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.573 qpair failed and we were unable to recover it. 00:26:43.573 [2024-07-15 14:05:38.148793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.573 [2024-07-15 14:05:38.148845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.573 qpair failed and we were unable to recover it. 00:26:43.573 [2024-07-15 14:05:38.149104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.573 [2024-07-15 14:05:38.149172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.573 qpair failed and we were unable to recover it. 00:26:43.573 [2024-07-15 14:05:38.149421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.573 [2024-07-15 14:05:38.149489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.573 qpair failed and we were unable to recover it. 
00:26:43.573 [2024-07-15 14:05:38.149726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.573 [2024-07-15 14:05:38.149823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.573 qpair failed and we were unable to recover it. 00:26:43.573 [2024-07-15 14:05:38.150125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.573 [2024-07-15 14:05:38.150177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.573 qpair failed and we were unable to recover it. 00:26:43.573 [2024-07-15 14:05:38.150483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.573 [2024-07-15 14:05:38.150552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.573 qpair failed and we were unable to recover it. 00:26:43.573 [2024-07-15 14:05:38.150847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.573 [2024-07-15 14:05:38.150901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.573 qpair failed and we were unable to recover it. 00:26:43.573 [2024-07-15 14:05:38.151170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.573 [2024-07-15 14:05:38.151240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.573 qpair failed and we were unable to recover it. 00:26:43.573 [2024-07-15 14:05:38.151492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.573 [2024-07-15 14:05:38.151560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.573 qpair failed and we were unable to recover it. 00:26:43.573 [2024-07-15 14:05:38.151858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.573 [2024-07-15 14:05:38.151929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.573 qpair failed and we were unable to recover it. 00:26:43.573 [2024-07-15 14:05:38.152196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.573 [2024-07-15 14:05:38.152265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.573 qpair failed and we were unable to recover it. 00:26:43.573 [2024-07-15 14:05:38.152569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.573 [2024-07-15 14:05:38.152639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.573 qpair failed and we were unable to recover it. 00:26:43.573 [2024-07-15 14:05:38.152910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.573 [2024-07-15 14:05:38.152981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.573 qpair failed and we were unable to recover it. 
00:26:43.573 [2024-07-15 14:05:38.153285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.573 [2024-07-15 14:05:38.153356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.573 qpair failed and we were unable to recover it. 00:26:43.573 [2024-07-15 14:05:38.153654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.573 [2024-07-15 14:05:38.153705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.573 qpair failed and we were unable to recover it. 00:26:43.574 [2024-07-15 14:05:38.154028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.574 [2024-07-15 14:05:38.154104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.574 qpair failed and we were unable to recover it. 00:26:43.574 [2024-07-15 14:05:38.154384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.574 [2024-07-15 14:05:38.154453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.574 qpair failed and we were unable to recover it. 00:26:43.574 [2024-07-15 14:05:38.154677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.574 [2024-07-15 14:05:38.154726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.574 qpair failed and we were unable to recover it. 00:26:43.574 [2024-07-15 14:05:38.155049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.574 [2024-07-15 14:05:38.155120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.574 qpair failed and we were unable to recover it. 00:26:43.574 [2024-07-15 14:05:38.155417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.574 [2024-07-15 14:05:38.155486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.574 qpair failed and we were unable to recover it. 00:26:43.574 [2024-07-15 14:05:38.155732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.574 [2024-07-15 14:05:38.155797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.574 qpair failed and we were unable to recover it. 00:26:43.574 [2024-07-15 14:05:38.156070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.574 [2024-07-15 14:05:38.156122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.574 qpair failed and we were unable to recover it. 00:26:43.574 [2024-07-15 14:05:38.156432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.574 [2024-07-15 14:05:38.156500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.574 qpair failed and we were unable to recover it. 
00:26:43.574 [2024-07-15 14:05:38.156797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.574 [2024-07-15 14:05:38.156850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.574 qpair failed and we were unable to recover it. 00:26:43.574 [2024-07-15 14:05:38.157086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.574 [2024-07-15 14:05:38.157154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.574 qpair failed and we were unable to recover it. 00:26:43.574 [2024-07-15 14:05:38.157469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.574 [2024-07-15 14:05:38.157537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.574 qpair failed and we were unable to recover it. 00:26:43.574 [2024-07-15 14:05:38.157852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.574 [2024-07-15 14:05:38.157905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.574 qpair failed and we were unable to recover it. 00:26:43.574 [2024-07-15 14:05:38.158179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.574 [2024-07-15 14:05:38.158250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.574 qpair failed and we were unable to recover it. 00:26:43.574 [2024-07-15 14:05:38.158592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.574 [2024-07-15 14:05:38.158660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.574 qpair failed and we were unable to recover it. 00:26:43.574 [2024-07-15 14:05:38.158938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.574 [2024-07-15 14:05:38.158991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.574 qpair failed and we were unable to recover it. 00:26:43.574 [2024-07-15 14:05:38.159292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.574 [2024-07-15 14:05:38.159368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.574 qpair failed and we were unable to recover it. 00:26:43.574 [2024-07-15 14:05:38.159686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.574 [2024-07-15 14:05:38.159769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.574 qpair failed and we were unable to recover it. 00:26:43.574 [2024-07-15 14:05:38.160086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.574 [2024-07-15 14:05:38.160137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.574 qpair failed and we were unable to recover it. 
00:26:43.574 [2024-07-15 14:05:38.160414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.574 [2024-07-15 14:05:38.160482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.574 qpair failed and we were unable to recover it. 00:26:43.574 [2024-07-15 14:05:38.160727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.574 [2024-07-15 14:05:38.160793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.574 qpair failed and we were unable to recover it. 00:26:43.574 [2024-07-15 14:05:38.161058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.574 [2024-07-15 14:05:38.161110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.574 qpair failed and we were unable to recover it. 00:26:43.574 [2024-07-15 14:05:38.161415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.574 [2024-07-15 14:05:38.161485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.574 qpair failed and we were unable to recover it. 00:26:43.574 [2024-07-15 14:05:38.161797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.574 [2024-07-15 14:05:38.161850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.574 qpair failed and we were unable to recover it. 00:26:43.574 [2024-07-15 14:05:38.162114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.574 [2024-07-15 14:05:38.162166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.574 qpair failed and we were unable to recover it. 00:26:43.574 [2024-07-15 14:05:38.162426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.574 [2024-07-15 14:05:38.162495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.574 qpair failed and we were unable to recover it. 00:26:43.574 [2024-07-15 14:05:38.162751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.574 [2024-07-15 14:05:38.162804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.574 qpair failed and we were unable to recover it. 00:26:43.574 [2024-07-15 14:05:38.163021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.574 [2024-07-15 14:05:38.163073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.574 qpair failed and we were unable to recover it. 00:26:43.574 [2024-07-15 14:05:38.163294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.574 [2024-07-15 14:05:38.163363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.574 qpair failed and we were unable to recover it. 
00:26:43.574 [2024-07-15 14:05:38.163663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.574 [2024-07-15 14:05:38.163731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.574 qpair failed and we were unable to recover it. 00:26:43.574 [2024-07-15 14:05:38.164040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.574 [2024-07-15 14:05:38.164092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.574 qpair failed and we were unable to recover it. 00:26:43.574 [2024-07-15 14:05:38.164410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.574 [2024-07-15 14:05:38.164478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.574 qpair failed and we were unable to recover it. 00:26:43.574 [2024-07-15 14:05:38.164753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.574 [2024-07-15 14:05:38.164806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.574 qpair failed and we were unable to recover it. 00:26:43.574 [2024-07-15 14:05:38.165062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.574 [2024-07-15 14:05:38.165114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.574 qpair failed and we were unable to recover it. 00:26:43.574 [2024-07-15 14:05:38.165416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.574 [2024-07-15 14:05:38.165486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.574 qpair failed and we were unable to recover it. 00:26:43.574 [2024-07-15 14:05:38.165810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.574 [2024-07-15 14:05:38.165863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.574 qpair failed and we were unable to recover it. 00:26:43.574 [2024-07-15 14:05:38.166117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.574 [2024-07-15 14:05:38.166169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.574 qpair failed and we were unable to recover it. 00:26:43.574 [2024-07-15 14:05:38.166468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.574 [2024-07-15 14:05:38.166538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.574 qpair failed and we were unable to recover it. 00:26:43.574 [2024-07-15 14:05:38.166820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.574 [2024-07-15 14:05:38.166874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.574 qpair failed and we were unable to recover it. 
00:26:43.574 [2024-07-15 14:05:38.167138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.574 [2024-07-15 14:05:38.167208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.574 qpair failed and we were unable to recover it. 00:26:43.574 [2024-07-15 14:05:38.167462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.575 [2024-07-15 14:05:38.167531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.575 qpair failed and we were unable to recover it. 00:26:43.575 [2024-07-15 14:05:38.167794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.575 [2024-07-15 14:05:38.167847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.575 qpair failed and we were unable to recover it. 00:26:43.575 [2024-07-15 14:05:38.168104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.575 [2024-07-15 14:05:38.168156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.575 qpair failed and we were unable to recover it. 00:26:43.575 [2024-07-15 14:05:38.168367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.575 [2024-07-15 14:05:38.168436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.575 qpair failed and we were unable to recover it. 00:26:43.575 [2024-07-15 14:05:38.168754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.575 [2024-07-15 14:05:38.168806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.575 qpair failed and we were unable to recover it. 00:26:43.575 [2024-07-15 14:05:38.169074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.575 [2024-07-15 14:05:38.169125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.575 qpair failed and we were unable to recover it. 00:26:43.575 [2024-07-15 14:05:38.169396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.575 [2024-07-15 14:05:38.169466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.575 qpair failed and we were unable to recover it. 00:26:43.575 [2024-07-15 14:05:38.169728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.575 [2024-07-15 14:05:38.169796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.575 qpair failed and we were unable to recover it. 00:26:43.575 [2024-07-15 14:05:38.170024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.575 [2024-07-15 14:05:38.170076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.575 qpair failed and we were unable to recover it. 
00:26:43.575 [2024-07-15 14:05:38.170373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.575 [2024-07-15 14:05:38.170443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.575 qpair failed and we were unable to recover it. 00:26:43.575 [2024-07-15 14:05:38.170707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.575 [2024-07-15 14:05:38.170773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.575 qpair failed and we were unable to recover it. 00:26:43.575 [2024-07-15 14:05:38.171099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.575 [2024-07-15 14:05:38.171149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.575 qpair failed and we were unable to recover it. 00:26:43.575 [2024-07-15 14:05:38.171458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.575 [2024-07-15 14:05:38.171527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.575 qpair failed and we were unable to recover it. 00:26:43.575 [2024-07-15 14:05:38.171834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.575 [2024-07-15 14:05:38.171887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.575 qpair failed and we were unable to recover it. 00:26:43.575 [2024-07-15 14:05:38.172156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.575 [2024-07-15 14:05:38.172207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.575 qpair failed and we were unable to recover it. 00:26:43.575 [2024-07-15 14:05:38.172513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.575 [2024-07-15 14:05:38.172583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.575 qpair failed and we were unable to recover it. 00:26:43.575 [2024-07-15 14:05:38.172889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.575 [2024-07-15 14:05:38.172941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.575 qpair failed and we were unable to recover it. 00:26:43.575 [2024-07-15 14:05:38.173235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.575 [2024-07-15 14:05:38.173303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.575 qpair failed and we were unable to recover it. 00:26:43.575 [2024-07-15 14:05:38.173598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.575 [2024-07-15 14:05:38.173668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.575 qpair failed and we were unable to recover it. 
00:26:43.575 [2024-07-15 14:05:38.173965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.575 [2024-07-15 14:05:38.174017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.575 qpair failed and we were unable to recover it. 00:26:43.575 [2024-07-15 14:05:38.174286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.575 [2024-07-15 14:05:38.174354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.575 qpair failed and we were unable to recover it. 00:26:43.575 [2024-07-15 14:05:38.174618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.575 [2024-07-15 14:05:38.174685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.575 qpair failed and we were unable to recover it. 00:26:43.575 [2024-07-15 14:05:38.175004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.575 [2024-07-15 14:05:38.175056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.575 qpair failed and we were unable to recover it. 00:26:43.575 [2024-07-15 14:05:38.175363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.575 [2024-07-15 14:05:38.175432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.575 qpair failed and we were unable to recover it. 00:26:43.575 [2024-07-15 14:05:38.175750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.575 [2024-07-15 14:05:38.175803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.575 qpair failed and we were unable to recover it. 00:26:43.575 [2024-07-15 14:05:38.176102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.575 [2024-07-15 14:05:38.176153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.575 qpair failed and we were unable to recover it. 00:26:43.575 [2024-07-15 14:05:38.176420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.575 [2024-07-15 14:05:38.176488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.575 qpair failed and we were unable to recover it. 00:26:43.575 [2024-07-15 14:05:38.176799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.575 [2024-07-15 14:05:38.176874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.575 qpair failed and we were unable to recover it. 00:26:43.575 [2024-07-15 14:05:38.177142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.575 [2024-07-15 14:05:38.177211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.575 qpair failed and we were unable to recover it. 
00:26:43.575 [2024-07-15 14:05:38.177522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.575 [2024-07-15 14:05:38.177590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.575 qpair failed and we were unable to recover it. 00:26:43.575 [2024-07-15 14:05:38.177838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.575 [2024-07-15 14:05:38.177891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.575 qpair failed and we were unable to recover it. 00:26:43.575 [2024-07-15 14:05:38.178201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.575 [2024-07-15 14:05:38.178270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.575 qpair failed and we were unable to recover it. 00:26:43.575 [2024-07-15 14:05:38.178570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.575 [2024-07-15 14:05:38.178639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.575 qpair failed and we were unable to recover it. 00:26:43.576 [2024-07-15 14:05:38.178940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.576 [2024-07-15 14:05:38.178993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.576 qpair failed and we were unable to recover it. 00:26:43.576 [2024-07-15 14:05:38.179261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.576 [2024-07-15 14:05:38.179329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.576 qpair failed and we were unable to recover it. 00:26:43.576 [2024-07-15 14:05:38.179592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.576 [2024-07-15 14:05:38.179663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.576 qpair failed and we were unable to recover it. 00:26:43.576 [2024-07-15 14:05:38.179981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.576 [2024-07-15 14:05:38.180052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.576 qpair failed and we were unable to recover it. 00:26:43.576 [2024-07-15 14:05:38.180349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.576 [2024-07-15 14:05:38.180417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.576 qpair failed and we were unable to recover it. 00:26:43.576 [2024-07-15 14:05:38.180699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.576 [2024-07-15 14:05:38.180764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.576 qpair failed and we were unable to recover it. 
00:26:43.576 [2024-07-15 14:05:38.181020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.576 [2024-07-15 14:05:38.181072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.576 qpair failed and we were unable to recover it. 00:26:43.576 [2024-07-15 14:05:38.181393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.576 [2024-07-15 14:05:38.181461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.576 qpair failed and we were unable to recover it. 00:26:43.576 [2024-07-15 14:05:38.181767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.576 [2024-07-15 14:05:38.181820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.576 qpair failed and we were unable to recover it. 00:26:43.576 [2024-07-15 14:05:38.182123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.576 [2024-07-15 14:05:38.182175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.576 qpair failed and we were unable to recover it. 00:26:43.576 [2024-07-15 14:05:38.182476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.576 [2024-07-15 14:05:38.182544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.576 qpair failed and we were unable to recover it. 00:26:43.576 [2024-07-15 14:05:38.182853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.576 [2024-07-15 14:05:38.182913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.576 qpair failed and we were unable to recover it. 00:26:43.576 [2024-07-15 14:05:38.183239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.576 [2024-07-15 14:05:38.183307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.576 qpair failed and we were unable to recover it. 00:26:43.576 [2024-07-15 14:05:38.183612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.576 [2024-07-15 14:05:38.183681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.576 qpair failed and we were unable to recover it. 00:26:43.576 [2024-07-15 14:05:38.184002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.576 [2024-07-15 14:05:38.184054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.576 qpair failed and we were unable to recover it. 00:26:43.576 [2024-07-15 14:05:38.184328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.576 [2024-07-15 14:05:38.184396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.576 qpair failed and we were unable to recover it. 
00:26:43.576 [2024-07-15 14:05:38.184671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.576 [2024-07-15 14:05:38.184721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.576 qpair failed and we were unable to recover it. 00:26:43.576 [2024-07-15 14:05:38.185002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.576 [2024-07-15 14:05:38.185054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.576 qpair failed and we were unable to recover it. 00:26:43.576 [2024-07-15 14:05:38.185303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.576 [2024-07-15 14:05:38.185371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.576 qpair failed and we were unable to recover it. 00:26:43.576 [2024-07-15 14:05:38.185644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.576 [2024-07-15 14:05:38.185713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.576 qpair failed and we were unable to recover it. 00:26:43.576 [2024-07-15 14:05:38.186008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.576 [2024-07-15 14:05:38.186061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.576 qpair failed and we were unable to recover it. 00:26:43.576 [2024-07-15 14:05:38.186375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.576 [2024-07-15 14:05:38.186444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.576 qpair failed and we were unable to recover it. 00:26:43.576 [2024-07-15 14:05:38.186759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.576 [2024-07-15 14:05:38.186812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.576 qpair failed and we were unable to recover it. 00:26:43.576 [2024-07-15 14:05:38.187109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.576 [2024-07-15 14:05:38.187161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.576 qpair failed and we were unable to recover it. 00:26:43.576 [2024-07-15 14:05:38.187416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.576 [2024-07-15 14:05:38.187486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.576 qpair failed and we were unable to recover it. 00:26:43.576 [2024-07-15 14:05:38.187732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.576 [2024-07-15 14:05:38.187798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.576 qpair failed and we were unable to recover it. 
00:26:43.576 [2024-07-15 14:05:38.188046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.576 [2024-07-15 14:05:38.188098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.576 qpair failed and we were unable to recover it. 00:26:43.576 [2024-07-15 14:05:38.188333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.576 [2024-07-15 14:05:38.188402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.576 qpair failed and we were unable to recover it. 00:26:43.576 [2024-07-15 14:05:38.188706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.576 [2024-07-15 14:05:38.188790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.576 qpair failed and we were unable to recover it. 00:26:43.576 [2024-07-15 14:05:38.189104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.576 [2024-07-15 14:05:38.189155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.576 qpair failed and we were unable to recover it. 00:26:43.576 [2024-07-15 14:05:38.189450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.576 [2024-07-15 14:05:38.189519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.576 qpair failed and we were unable to recover it. 00:26:43.576 [2024-07-15 14:05:38.189818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.576 [2024-07-15 14:05:38.189888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.576 qpair failed and we were unable to recover it. 00:26:43.576 [2024-07-15 14:05:38.190190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.576 [2024-07-15 14:05:38.190260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.576 qpair failed and we were unable to recover it. 00:26:43.576 [2024-07-15 14:05:38.190528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.576 [2024-07-15 14:05:38.190597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.576 qpair failed and we were unable to recover it. 00:26:43.576 [2024-07-15 14:05:38.190897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.576 [2024-07-15 14:05:38.190950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.576 qpair failed and we were unable to recover it. 00:26:43.576 [2024-07-15 14:05:38.191277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.576 [2024-07-15 14:05:38.191348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.576 qpair failed and we were unable to recover it. 
00:26:43.576 [2024-07-15 14:05:38.191589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.576 [2024-07-15 14:05:38.191658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.576 qpair failed and we were unable to recover it. 00:26:43.576 [2024-07-15 14:05:38.191971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.576 [2024-07-15 14:05:38.192024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.576 qpair failed and we were unable to recover it. 00:26:43.576 [2024-07-15 14:05:38.192323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.576 [2024-07-15 14:05:38.192400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.576 qpair failed and we were unable to recover it. 00:26:43.576 [2024-07-15 14:05:38.192657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.576 [2024-07-15 14:05:38.192708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.576 qpair failed and we were unable to recover it. 00:26:43.576 [2024-07-15 14:05:38.193032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.576 [2024-07-15 14:05:38.193083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.577 qpair failed and we were unable to recover it. 00:26:43.577 [2024-07-15 14:05:38.193342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.577 [2024-07-15 14:05:38.193411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.577 qpair failed and we were unable to recover it. 00:26:43.577 [2024-07-15 14:05:38.193709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.577 [2024-07-15 14:05:38.193783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.577 qpair failed and we were unable to recover it. 00:26:43.577 [2024-07-15 14:05:38.194044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.577 [2024-07-15 14:05:38.194116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.577 qpair failed and we were unable to recover it. 00:26:43.577 [2024-07-15 14:05:38.194423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.577 [2024-07-15 14:05:38.194492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.577 qpair failed and we were unable to recover it. 00:26:43.577 [2024-07-15 14:05:38.194820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.577 [2024-07-15 14:05:38.194892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.577 qpair failed and we were unable to recover it. 
00:26:43.577 [2024-07-15 14:05:38.195175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.577 [2024-07-15 14:05:38.195244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.577 qpair failed and we were unable to recover it. 00:26:43.577 [2024-07-15 14:05:38.195516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.577 [2024-07-15 14:05:38.195584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.577 qpair failed and we were unable to recover it. 00:26:43.577 [2024-07-15 14:05:38.195893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.577 [2024-07-15 14:05:38.195945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.577 qpair failed and we were unable to recover it. 00:26:43.577 [2024-07-15 14:05:38.196144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.577 [2024-07-15 14:05:38.196211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.577 qpair failed and we were unable to recover it. 00:26:43.577 [2024-07-15 14:05:38.196478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.577 [2024-07-15 14:05:38.196547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.577 qpair failed and we were unable to recover it. 00:26:43.577 [2024-07-15 14:05:38.196808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.577 [2024-07-15 14:05:38.196878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.577 qpair failed and we were unable to recover it. 00:26:43.577 [2024-07-15 14:05:38.197111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.577 [2024-07-15 14:05:38.197180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.577 qpair failed and we were unable to recover it. 00:26:43.577 [2024-07-15 14:05:38.197437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.577 [2024-07-15 14:05:38.197507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.577 qpair failed and we were unable to recover it. 00:26:43.577 [2024-07-15 14:05:38.197719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.577 [2024-07-15 14:05:38.197782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.577 qpair failed and we were unable to recover it. 00:26:43.577 [2024-07-15 14:05:38.198098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.577 [2024-07-15 14:05:38.198169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.577 qpair failed and we were unable to recover it. 
00:26:43.577 [2024-07-15 14:05:38.198421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.577 [2024-07-15 14:05:38.198489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.577 qpair failed and we were unable to recover it. 00:26:43.577 [2024-07-15 14:05:38.198788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.577 [2024-07-15 14:05:38.198842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.577 qpair failed and we were unable to recover it. 00:26:43.577 [2024-07-15 14:05:38.199136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.577 [2024-07-15 14:05:38.199207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.577 qpair failed and we were unable to recover it. 00:26:43.577 [2024-07-15 14:05:38.199457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.577 [2024-07-15 14:05:38.199525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.577 qpair failed and we were unable to recover it. 00:26:43.577 [2024-07-15 14:05:38.199783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.577 [2024-07-15 14:05:38.199836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.577 qpair failed and we were unable to recover it. 00:26:43.577 [2024-07-15 14:05:38.200149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.577 [2024-07-15 14:05:38.200220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.577 qpair failed and we were unable to recover it. 00:26:43.577 [2024-07-15 14:05:38.200525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.577 [2024-07-15 14:05:38.200593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.577 qpair failed and we were unable to recover it. 00:26:43.577 [2024-07-15 14:05:38.200854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.577 [2024-07-15 14:05:38.200907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.577 qpair failed and we were unable to recover it. 00:26:43.577 [2024-07-15 14:05:38.201227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.577 [2024-07-15 14:05:38.201297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.577 qpair failed and we were unable to recover it. 00:26:43.577 [2024-07-15 14:05:38.201604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.577 [2024-07-15 14:05:38.201673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.577 qpair failed and we were unable to recover it. 
00:26:43.577 [2024-07-15 14:05:38.202000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.577 [2024-07-15 14:05:38.202053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.577 qpair failed and we were unable to recover it. 00:26:43.577 [2024-07-15 14:05:38.202316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.577 [2024-07-15 14:05:38.202386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.577 qpair failed and we were unable to recover it. 00:26:43.577 [2024-07-15 14:05:38.202691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.577 [2024-07-15 14:05:38.202759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.577 qpair failed and we were unable to recover it. 00:26:43.577 [2024-07-15 14:05:38.203030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.577 [2024-07-15 14:05:38.203081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.577 qpair failed and we were unable to recover it. 00:26:43.577 [2024-07-15 14:05:38.203400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.577 [2024-07-15 14:05:38.203469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.577 qpair failed and we were unable to recover it. 00:26:43.577 [2024-07-15 14:05:38.203766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.577 [2024-07-15 14:05:38.203818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.577 qpair failed and we were unable to recover it. 00:26:43.577 [2024-07-15 14:05:38.204091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.577 [2024-07-15 14:05:38.204143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.577 qpair failed and we were unable to recover it. 00:26:43.577 [2024-07-15 14:05:38.204447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.577 [2024-07-15 14:05:38.204516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.577 qpair failed and we were unable to recover it. 00:26:43.577 [2024-07-15 14:05:38.204765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.577 [2024-07-15 14:05:38.204818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.577 qpair failed and we were unable to recover it. 00:26:43.577 [2024-07-15 14:05:38.205160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.577 [2024-07-15 14:05:38.205212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.577 qpair failed and we were unable to recover it. 
00:26:43.577 [2024-07-15 14:05:38.205527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.577 [2024-07-15 14:05:38.205597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.577 qpair failed and we were unable to recover it. 00:26:43.577 [2024-07-15 14:05:38.205847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.577 [2024-07-15 14:05:38.205901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.577 qpair failed and we were unable to recover it. 00:26:43.577 [2024-07-15 14:05:38.206158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.577 [2024-07-15 14:05:38.206228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.577 qpair failed and we were unable to recover it. 00:26:43.577 [2024-07-15 14:05:38.206536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.577 [2024-07-15 14:05:38.206607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.577 qpair failed and we were unable to recover it. 00:26:43.577 [2024-07-15 14:05:38.206899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.578 [2024-07-15 14:05:38.206952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.578 qpair failed and we were unable to recover it. 00:26:43.578 [2024-07-15 14:05:38.207230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.578 [2024-07-15 14:05:38.207299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.578 qpair failed and we were unable to recover it. 00:26:43.578 [2024-07-15 14:05:38.207596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.578 [2024-07-15 14:05:38.207666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.578 qpair failed and we were unable to recover it. 00:26:43.578 [2024-07-15 14:05:38.207973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.578 [2024-07-15 14:05:38.208042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.578 qpair failed and we were unable to recover it. 00:26:43.578 [2024-07-15 14:05:38.208338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.578 [2024-07-15 14:05:38.208406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.578 qpair failed and we were unable to recover it. 00:26:43.578 [2024-07-15 14:05:38.208671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.578 [2024-07-15 14:05:38.208720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.578 qpair failed and we were unable to recover it. 
00:26:43.578 [2024-07-15 14:05:38.209037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.578 [2024-07-15 14:05:38.209089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.578 qpair failed and we were unable to recover it. 00:26:43.578 [2024-07-15 14:05:38.209398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.578 [2024-07-15 14:05:38.209467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.578 qpair failed and we were unable to recover it. 00:26:43.578 [2024-07-15 14:05:38.209733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.578 [2024-07-15 14:05:38.209799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.578 qpair failed and we were unable to recover it. 00:26:43.578 [2024-07-15 14:05:38.210119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.578 [2024-07-15 14:05:38.210188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.578 qpair failed and we were unable to recover it. 00:26:43.578 [2024-07-15 14:05:38.210460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.578 [2024-07-15 14:05:38.210527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.578 qpair failed and we were unable to recover it. 00:26:43.578 [2024-07-15 14:05:38.210824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.578 [2024-07-15 14:05:38.210876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.578 qpair failed and we were unable to recover it. 00:26:43.578 [2024-07-15 14:05:38.211143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.578 [2024-07-15 14:05:38.211212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.578 qpair failed and we were unable to recover it. 00:26:43.578 [2024-07-15 14:05:38.211517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.578 [2024-07-15 14:05:38.211588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.578 qpair failed and we were unable to recover it. 00:26:43.578 [2024-07-15 14:05:38.211814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.578 [2024-07-15 14:05:38.211866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.578 qpair failed and we were unable to recover it. 00:26:43.578 [2024-07-15 14:05:38.212131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.578 [2024-07-15 14:05:38.212200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.578 qpair failed and we were unable to recover it. 
00:26:43.578 [2024-07-15 14:05:38.212505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.578 [2024-07-15 14:05:38.212574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.578 qpair failed and we were unable to recover it. 00:26:43.578 [2024-07-15 14:05:38.212875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.578 [2024-07-15 14:05:38.212927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.578 qpair failed and we were unable to recover it. 00:26:43.578 [2024-07-15 14:05:38.213201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.578 [2024-07-15 14:05:38.213269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.578 qpair failed and we were unable to recover it. 00:26:43.578 [2024-07-15 14:05:38.213591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.578 [2024-07-15 14:05:38.213660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.578 qpair failed and we were unable to recover it. 00:26:43.578 [2024-07-15 14:05:38.213951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.578 [2024-07-15 14:05:38.214023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.578 qpair failed and we were unable to recover it. 00:26:43.578 [2024-07-15 14:05:38.214319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.578 [2024-07-15 14:05:38.214388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.578 qpair failed and we were unable to recover it. 00:26:43.578 [2024-07-15 14:05:38.214683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.578 [2024-07-15 14:05:38.214734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.578 qpair failed and we were unable to recover it. 00:26:43.578 [2024-07-15 14:05:38.215063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.578 [2024-07-15 14:05:38.215136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.578 qpair failed and we were unable to recover it. 00:26:43.578 [2024-07-15 14:05:38.215391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.578 [2024-07-15 14:05:38.215460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.578 qpair failed and we were unable to recover it. 00:26:43.578 [2024-07-15 14:05:38.215765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.578 [2024-07-15 14:05:38.215818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.578 qpair failed and we were unable to recover it. 
00:26:43.578 [2024-07-15 14:05:38.216114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.578 [2024-07-15 14:05:38.216193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.578 qpair failed and we were unable to recover it. 00:26:43.578 [2024-07-15 14:05:38.216506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.578 [2024-07-15 14:05:38.216574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.578 qpair failed and we were unable to recover it. 00:26:43.578 [2024-07-15 14:05:38.216892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.578 [2024-07-15 14:05:38.216945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.578 qpair failed and we were unable to recover it. 00:26:43.578 [2024-07-15 14:05:38.217237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.578 [2024-07-15 14:05:38.217306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.578 qpair failed and we were unable to recover it. 00:26:43.578 [2024-07-15 14:05:38.217606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.578 [2024-07-15 14:05:38.217674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.578 qpair failed and we were unable to recover it. 00:26:43.578 [2024-07-15 14:05:38.217993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.578 [2024-07-15 14:05:38.218045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.578 qpair failed and we were unable to recover it. 00:26:43.578 [2024-07-15 14:05:38.218300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.578 [2024-07-15 14:05:38.218369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.578 qpair failed and we were unable to recover it. 00:26:43.578 [2024-07-15 14:05:38.218674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.578 [2024-07-15 14:05:38.218756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.578 qpair failed and we were unable to recover it. 00:26:43.578 [2024-07-15 14:05:38.219058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.578 [2024-07-15 14:05:38.219108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.579 qpair failed and we were unable to recover it. 00:26:43.579 [2024-07-15 14:05:38.219408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.579 [2024-07-15 14:05:38.219476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.579 qpair failed and we were unable to recover it. 
00:26:43.579 [2024-07-15 14:05:38.219755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.579 [2024-07-15 14:05:38.219808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.579 qpair failed and we were unable to recover it. 00:26:43.579 [2024-07-15 14:05:38.220084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.579 [2024-07-15 14:05:38.220135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.579 qpair failed and we were unable to recover it. 00:26:43.579 [2024-07-15 14:05:38.220402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.579 [2024-07-15 14:05:38.220472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.579 qpair failed and we were unable to recover it. 00:26:43.579 [2024-07-15 14:05:38.220782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.579 [2024-07-15 14:05:38.220836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.579 qpair failed and we were unable to recover it. 00:26:43.579 [2024-07-15 14:05:38.221102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.579 [2024-07-15 14:05:38.221153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.579 qpair failed and we were unable to recover it. 00:26:43.579 [2024-07-15 14:05:38.221456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.579 [2024-07-15 14:05:38.221525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.579 qpair failed and we were unable to recover it. 00:26:43.579 [2024-07-15 14:05:38.221781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.579 [2024-07-15 14:05:38.221833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.579 qpair failed and we were unable to recover it. 00:26:43.579 [2024-07-15 14:05:38.222110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.579 [2024-07-15 14:05:38.222161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.579 qpair failed and we were unable to recover it. 00:26:43.579 [2024-07-15 14:05:38.222472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.579 [2024-07-15 14:05:38.222541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.579 qpair failed and we were unable to recover it. 00:26:43.579 [2024-07-15 14:05:38.222852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.579 [2024-07-15 14:05:38.222905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.579 qpair failed and we were unable to recover it. 
00:26:43.579 [2024-07-15 14:05:38.223212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.579 [2024-07-15 14:05:38.223281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.579 qpair failed and we were unable to recover it. 00:26:43.579 [2024-07-15 14:05:38.223585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.579 [2024-07-15 14:05:38.223655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.579 qpair failed and we were unable to recover it. 00:26:43.579 [2024-07-15 14:05:38.223892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.579 [2024-07-15 14:05:38.223944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.579 qpair failed and we were unable to recover it. 00:26:43.579 [2024-07-15 14:05:38.224193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.579 [2024-07-15 14:05:38.224263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.579 qpair failed and we were unable to recover it. 00:26:43.579 [2024-07-15 14:05:38.224558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.579 [2024-07-15 14:05:38.224627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.579 qpair failed and we were unable to recover it. 00:26:43.579 [2024-07-15 14:05:38.224902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.579 [2024-07-15 14:05:38.224971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.579 qpair failed and we were unable to recover it. 00:26:43.579 [2024-07-15 14:05:38.225315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.579 [2024-07-15 14:05:38.225384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.579 qpair failed and we were unable to recover it. 00:26:43.579 [2024-07-15 14:05:38.225651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.579 [2024-07-15 14:05:38.225708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.579 qpair failed and we were unable to recover it. 00:26:43.579 [2024-07-15 14:05:38.225978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.579 [2024-07-15 14:05:38.226047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.579 qpair failed and we were unable to recover it. 00:26:43.579 [2024-07-15 14:05:38.226376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.579 [2024-07-15 14:05:38.226445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.579 qpair failed and we were unable to recover it. 
00:26:43.579 [2024-07-15 14:05:38.226736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.579 [2024-07-15 14:05:38.226799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.579 qpair failed and we were unable to recover it. 00:26:43.579 [2024-07-15 14:05:38.227108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.579 [2024-07-15 14:05:38.227158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.579 qpair failed and we were unable to recover it. 00:26:43.579 [2024-07-15 14:05:38.227426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.579 [2024-07-15 14:05:38.227494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.579 qpair failed and we were unable to recover it. 00:26:43.579 [2024-07-15 14:05:38.227750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.579 [2024-07-15 14:05:38.227803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.579 qpair failed and we were unable to recover it. 00:26:43.579 [2024-07-15 14:05:38.228061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.579 [2024-07-15 14:05:38.228132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.579 qpair failed and we were unable to recover it. 00:26:43.579 [2024-07-15 14:05:38.228433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.579 [2024-07-15 14:05:38.228502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.579 qpair failed and we were unable to recover it. 00:26:43.579 [2024-07-15 14:05:38.228805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.579 [2024-07-15 14:05:38.228880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.579 qpair failed and we were unable to recover it. 00:26:43.579 [2024-07-15 14:05:38.229185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.579 [2024-07-15 14:05:38.229237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.579 qpair failed and we were unable to recover it. 00:26:43.579 [2024-07-15 14:05:38.229550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.579 [2024-07-15 14:05:38.229619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.579 qpair failed and we were unable to recover it. 00:26:43.579 [2024-07-15 14:05:38.229917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.579 [2024-07-15 14:05:38.229969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.579 qpair failed and we were unable to recover it. 
00:26:43.579 [2024-07-15 14:05:38.230240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.579 [2024-07-15 14:05:38.230309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.579 qpair failed and we were unable to recover it. 00:26:43.580 [2024-07-15 14:05:38.230622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.580 [2024-07-15 14:05:38.230691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.580 qpair failed and we were unable to recover it. 00:26:43.580 [2024-07-15 14:05:38.231014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.580 [2024-07-15 14:05:38.231067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.580 qpair failed and we were unable to recover it. 00:26:43.580 [2024-07-15 14:05:38.231369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.580 [2024-07-15 14:05:38.231438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.580 qpair failed and we were unable to recover it. 00:26:43.580 [2024-07-15 14:05:38.231730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.580 [2024-07-15 14:05:38.231792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.580 qpair failed and we were unable to recover it. 00:26:43.580 [2024-07-15 14:05:38.232059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.580 [2024-07-15 14:05:38.232110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.580 qpair failed and we were unable to recover it. 00:26:43.580 [2024-07-15 14:05:38.232410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.580 [2024-07-15 14:05:38.232478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.580 qpair failed and we were unable to recover it. 00:26:43.580 [2024-07-15 14:05:38.232784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.580 [2024-07-15 14:05:38.232836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.580 qpair failed and we were unable to recover it. 00:26:43.580 [2024-07-15 14:05:38.233141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.580 [2024-07-15 14:05:38.233192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.580 qpair failed and we were unable to recover it. 00:26:43.580 [2024-07-15 14:05:38.233463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.580 [2024-07-15 14:05:38.233532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.580 qpair failed and we were unable to recover it. 
00:26:43.580 [2024-07-15 14:05:38.233834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.580 [2024-07-15 14:05:38.233887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.580 qpair failed and we were unable to recover it. 00:26:43.580 [2024-07-15 14:05:38.234196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.580 [2024-07-15 14:05:38.234266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.580 qpair failed and we were unable to recover it. 00:26:43.580 [2024-07-15 14:05:38.234538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.580 [2024-07-15 14:05:38.234607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.580 qpair failed and we were unable to recover it. 00:26:43.580 [2024-07-15 14:05:38.234875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.580 [2024-07-15 14:05:38.234928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.580 qpair failed and we were unable to recover it. 00:26:43.580 [2024-07-15 14:05:38.235239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.580 [2024-07-15 14:05:38.235315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.580 qpair failed and we were unable to recover it. 00:26:43.580 [2024-07-15 14:05:38.235515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.580 [2024-07-15 14:05:38.235583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.580 qpair failed and we were unable to recover it. 00:26:43.580 [2024-07-15 14:05:38.235886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.580 [2024-07-15 14:05:38.235939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.580 qpair failed and we were unable to recover it. 00:26:43.580 [2024-07-15 14:05:38.236204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.580 [2024-07-15 14:05:38.236273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.580 qpair failed and we were unable to recover it. 00:26:43.580 [2024-07-15 14:05:38.236576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.580 [2024-07-15 14:05:38.236646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.580 qpair failed and we were unable to recover it. 00:26:43.580 [2024-07-15 14:05:38.236953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.580 [2024-07-15 14:05:38.237023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.580 qpair failed and we were unable to recover it. 
00:26:43.580 [2024-07-15 14:05:38.237334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.580 [2024-07-15 14:05:38.237403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.580 qpair failed and we were unable to recover it. 00:26:43.580 [2024-07-15 14:05:38.237709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.580 [2024-07-15 14:05:38.237786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.580 qpair failed and we were unable to recover it. 00:26:43.580 [2024-07-15 14:05:38.238095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.580 [2024-07-15 14:05:38.238162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.580 qpair failed and we were unable to recover it. 00:26:43.580 [2024-07-15 14:05:38.238468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.580 [2024-07-15 14:05:38.238536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.580 qpair failed and we were unable to recover it. 00:26:43.580 [2024-07-15 14:05:38.238843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.580 [2024-07-15 14:05:38.238896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.580 qpair failed and we were unable to recover it. 00:26:43.580 [2024-07-15 14:05:38.239197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.580 [2024-07-15 14:05:38.239266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.580 qpair failed and we were unable to recover it. 00:26:43.580 [2024-07-15 14:05:38.239487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.580 [2024-07-15 14:05:38.239556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.580 qpair failed and we were unable to recover it. 00:26:43.580 [2024-07-15 14:05:38.239848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.580 [2024-07-15 14:05:38.239901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.580 qpair failed and we were unable to recover it. 00:26:43.580 [2024-07-15 14:05:38.240169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.580 [2024-07-15 14:05:38.240238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.580 qpair failed and we were unable to recover it. 00:26:43.580 [2024-07-15 14:05:38.240539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.580 [2024-07-15 14:05:38.240606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.580 qpair failed and we were unable to recover it. 
00:26:43.580 [2024-07-15 14:05:38.240905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.580 [2024-07-15 14:05:38.240958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.580 qpair failed and we were unable to recover it. 00:26:43.580 [2024-07-15 14:05:38.241216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.580 [2024-07-15 14:05:38.241284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.580 qpair failed and we were unable to recover it. 00:26:43.580 [2024-07-15 14:05:38.241592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.580 [2024-07-15 14:05:38.241661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.580 qpair failed and we were unable to recover it. 00:26:43.580 [2024-07-15 14:05:38.241953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.580 [2024-07-15 14:05:38.242022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.580 qpair failed and we were unable to recover it. 00:26:43.580 [2024-07-15 14:05:38.242282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.580 [2024-07-15 14:05:38.242352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.580 qpair failed and we were unable to recover it. 00:26:43.580 [2024-07-15 14:05:38.242645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.580 [2024-07-15 14:05:38.242696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.580 qpair failed and we were unable to recover it. 00:26:43.580 [2024-07-15 14:05:38.242930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.580 [2024-07-15 14:05:38.243000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.580 qpair failed and we were unable to recover it. 00:26:43.580 [2024-07-15 14:05:38.243261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.580 [2024-07-15 14:05:38.243331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.580 qpair failed and we were unable to recover it. 00:26:43.580 [2024-07-15 14:05:38.243633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.580 [2024-07-15 14:05:38.243702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.580 qpair failed and we were unable to recover it. 00:26:43.580 [2024-07-15 14:05:38.244020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.580 [2024-07-15 14:05:38.244097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.580 qpair failed and we were unable to recover it. 
00:26:43.580 [2024-07-15 14:05:38.244402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.580 [2024-07-15 14:05:38.244470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.580 qpair failed and we were unable to recover it. 00:26:43.580 [2024-07-15 14:05:38.244766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.581 [2024-07-15 14:05:38.244818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.581 qpair failed and we were unable to recover it. 00:26:43.581 [2024-07-15 14:05:38.245085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.581 [2024-07-15 14:05:38.245136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.581 qpair failed and we were unable to recover it. 00:26:43.581 [2024-07-15 14:05:38.245445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.581 [2024-07-15 14:05:38.245514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.581 qpair failed and we were unable to recover it. 00:26:43.581 [2024-07-15 14:05:38.245819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.581 [2024-07-15 14:05:38.245872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.581 qpair failed and we were unable to recover it. 00:26:43.581 [2024-07-15 14:05:38.246133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.581 [2024-07-15 14:05:38.246202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.581 qpair failed and we were unable to recover it. 00:26:43.581 [2024-07-15 14:05:38.246467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.581 [2024-07-15 14:05:38.246535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.581 qpair failed and we were unable to recover it. 00:26:43.581 [2024-07-15 14:05:38.246849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.581 [2024-07-15 14:05:38.246902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.581 qpair failed and we were unable to recover it. 00:26:43.581 [2024-07-15 14:05:38.247155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.581 [2024-07-15 14:05:38.247225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.581 qpair failed and we were unable to recover it. 00:26:43.581 [2024-07-15 14:05:38.247448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.581 [2024-07-15 14:05:38.247517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.581 qpair failed and we were unable to recover it. 
00:26:43.581 [2024-07-15 14:05:38.247776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.581 [2024-07-15 14:05:38.247827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.581 qpair failed and we were unable to recover it. 00:26:43.581 [2024-07-15 14:05:38.248130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.581 [2024-07-15 14:05:38.248200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.581 qpair failed and we were unable to recover it. 00:26:43.581 [2024-07-15 14:05:38.248473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.581 [2024-07-15 14:05:38.248541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.581 qpair failed and we were unable to recover it. 00:26:43.581 [2024-07-15 14:05:38.248847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.581 [2024-07-15 14:05:38.248899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.581 qpair failed and we were unable to recover it. 00:26:43.581 [2024-07-15 14:05:38.249215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.581 [2024-07-15 14:05:38.249286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.581 qpair failed and we were unable to recover it. 00:26:43.581 [2024-07-15 14:05:38.249601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.581 [2024-07-15 14:05:38.249671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.581 qpair failed and we were unable to recover it. 00:26:43.581 [2024-07-15 14:05:38.249984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.581 [2024-07-15 14:05:38.250037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.581 qpair failed and we were unable to recover it. 00:26:43.581 [2024-07-15 14:05:38.250308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.581 [2024-07-15 14:05:38.250377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.581 qpair failed and we were unable to recover it. 00:26:43.581 [2024-07-15 14:05:38.250626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.581 [2024-07-15 14:05:38.250678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.581 qpair failed and we were unable to recover it. 00:26:43.581 [2024-07-15 14:05:38.250944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.581 [2024-07-15 14:05:38.250997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.581 qpair failed and we were unable to recover it. 
00:26:43.581 [2024-07-15 14:05:38.251302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.581 [2024-07-15 14:05:38.251372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.581 qpair failed and we were unable to recover it. 00:26:43.581 [2024-07-15 14:05:38.251626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.581 [2024-07-15 14:05:38.251677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.581 qpair failed and we were unable to recover it. 00:26:43.581 [2024-07-15 14:05:38.251898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.581 [2024-07-15 14:05:38.251950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.581 qpair failed and we were unable to recover it. 00:26:43.581 [2024-07-15 14:05:38.252255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.581 [2024-07-15 14:05:38.252324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.581 qpair failed and we were unable to recover it. 00:26:43.581 [2024-07-15 14:05:38.252639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.581 [2024-07-15 14:05:38.252709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.581 qpair failed and we were unable to recover it. 00:26:43.581 [2024-07-15 14:05:38.252983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.581 [2024-07-15 14:05:38.253053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.581 qpair failed and we were unable to recover it. 00:26:43.581 [2024-07-15 14:05:38.253354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.581 [2024-07-15 14:05:38.253422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.581 qpair failed and we were unable to recover it. 00:26:43.581 [2024-07-15 14:05:38.253674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.581 [2024-07-15 14:05:38.253726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.581 qpair failed and we were unable to recover it. 00:26:43.581 [2024-07-15 14:05:38.254048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.581 [2024-07-15 14:05:38.254101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.581 qpair failed and we were unable to recover it. 00:26:43.581 [2024-07-15 14:05:38.254410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.581 [2024-07-15 14:05:38.254480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.581 qpair failed and we were unable to recover it. 
00:26:43.581 [2024-07-15 14:05:38.254705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.581 [2024-07-15 14:05:38.254772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.581 qpair failed and we were unable to recover it. 00:26:43.581 [2024-07-15 14:05:38.255031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.581 [2024-07-15 14:05:38.255083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.581 qpair failed and we were unable to recover it. 00:26:43.581 [2024-07-15 14:05:38.255370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.581 [2024-07-15 14:05:38.255439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.581 qpair failed and we were unable to recover it. 00:26:43.581 [2024-07-15 14:05:38.255736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.581 [2024-07-15 14:05:38.255802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.581 qpair failed and we were unable to recover it. 00:26:43.581 [2024-07-15 14:05:38.256092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.581 [2024-07-15 14:05:38.256143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.581 qpair failed and we were unable to recover it. 00:26:43.581 [2024-07-15 14:05:38.256478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.581 [2024-07-15 14:05:38.256546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.581 qpair failed and we were unable to recover it. 00:26:43.581 [2024-07-15 14:05:38.256805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.581 [2024-07-15 14:05:38.256859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.581 qpair failed and we were unable to recover it. 00:26:43.581 [2024-07-15 14:05:38.257126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.581 [2024-07-15 14:05:38.257194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.581 qpair failed and we were unable to recover it. 00:26:43.581 [2024-07-15 14:05:38.257540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.581 [2024-07-15 14:05:38.257615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.581 qpair failed and we were unable to recover it. 00:26:43.581 [2024-07-15 14:05:38.257902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.581 [2024-07-15 14:05:38.257957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.581 qpair failed and we were unable to recover it. 
00:26:43.581 [2024-07-15 14:05:38.258230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.581 [2024-07-15 14:05:38.258299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.581 qpair failed and we were unable to recover it. 00:26:43.581 [2024-07-15 14:05:38.258607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.581 [2024-07-15 14:05:38.258674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.581 qpair failed and we were unable to recover it. 00:26:43.581 [2024-07-15 14:05:38.258981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.581 [2024-07-15 14:05:38.259059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.582 qpair failed and we were unable to recover it. 00:26:43.582 [2024-07-15 14:05:38.259379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.582 [2024-07-15 14:05:38.259446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.582 qpair failed and we were unable to recover it. 00:26:43.582 [2024-07-15 14:05:38.259723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.582 [2024-07-15 14:05:38.259786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.582 qpair failed and we were unable to recover it. 00:26:43.582 [2024-07-15 14:05:38.260039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.582 [2024-07-15 14:05:38.260090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.582 qpair failed and we were unable to recover it. 00:26:43.582 [2024-07-15 14:05:38.260274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.582 [2024-07-15 14:05:38.260343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.582 qpair failed and we were unable to recover it. 00:26:43.582 [2024-07-15 14:05:38.260620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.582 [2024-07-15 14:05:38.260687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.582 qpair failed and we were unable to recover it. 00:26:43.582 [2024-07-15 14:05:38.260980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.582 [2024-07-15 14:05:38.261031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.582 qpair failed and we were unable to recover it. 00:26:43.582 [2024-07-15 14:05:38.261262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.582 [2024-07-15 14:05:38.261331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.582 qpair failed and we were unable to recover it. 
00:26:43.582 [2024-07-15 14:05:38.261597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.582 [2024-07-15 14:05:38.261666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.582 qpair failed and we were unable to recover it. 00:26:43.582 [2024-07-15 14:05:38.261936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.582 [2024-07-15 14:05:38.262007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.582 qpair failed and we were unable to recover it. 00:26:43.582 [2024-07-15 14:05:38.262309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.582 [2024-07-15 14:05:38.262376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.582 qpair failed and we were unable to recover it. 00:26:43.582 [2024-07-15 14:05:38.262658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.582 [2024-07-15 14:05:38.262709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.582 qpair failed and we were unable to recover it. 00:26:43.582 [2024-07-15 14:05:38.263048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.582 [2024-07-15 14:05:38.263118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.582 qpair failed and we were unable to recover it. 00:26:43.582 [2024-07-15 14:05:38.263430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.582 [2024-07-15 14:05:38.263498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.582 qpair failed and we were unable to recover it. 00:26:43.582 [2024-07-15 14:05:38.263797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.582 [2024-07-15 14:05:38.263850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.582 qpair failed and we were unable to recover it. 00:26:43.582 [2024-07-15 14:05:38.264110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.582 [2024-07-15 14:05:38.264179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.582 qpair failed and we were unable to recover it. 00:26:43.582 [2024-07-15 14:05:38.264485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.582 [2024-07-15 14:05:38.264556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.582 qpair failed and we were unable to recover it. 00:26:43.582 [2024-07-15 14:05:38.264807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.582 [2024-07-15 14:05:38.264860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.582 qpair failed and we were unable to recover it. 
00:26:43.582 [2024-07-15 14:05:38.265165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.582 [2024-07-15 14:05:38.265235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.582 qpair failed and we were unable to recover it. 00:26:43.582 [2024-07-15 14:05:38.265550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.582 [2024-07-15 14:05:38.265618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.582 qpair failed and we were unable to recover it. 00:26:43.582 [2024-07-15 14:05:38.265914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.582 [2024-07-15 14:05:38.265967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.582 qpair failed and we were unable to recover it. 00:26:43.582 [2024-07-15 14:05:38.266277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.582 [2024-07-15 14:05:38.266345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.582 qpair failed and we were unable to recover it. 00:26:43.582 [2024-07-15 14:05:38.266655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.582 [2024-07-15 14:05:38.266724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.582 qpair failed and we were unable to recover it. 00:26:43.582 [2024-07-15 14:05:38.267057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.582 [2024-07-15 14:05:38.267132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.582 qpair failed and we were unable to recover it. 00:26:43.582 [2024-07-15 14:05:38.267428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.582 [2024-07-15 14:05:38.267496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.582 qpair failed and we were unable to recover it. 00:26:43.582 [2024-07-15 14:05:38.267787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.582 [2024-07-15 14:05:38.267841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.582 qpair failed and we were unable to recover it. 00:26:43.582 [2024-07-15 14:05:38.268114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.582 [2024-07-15 14:05:38.268183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.582 qpair failed and we were unable to recover it. 00:26:43.582 [2024-07-15 14:05:38.268491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.582 [2024-07-15 14:05:38.268568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.582 qpair failed and we were unable to recover it. 
00:26:43.582 [2024-07-15 14:05:38.268823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.582 [2024-07-15 14:05:38.268876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.582 qpair failed and we were unable to recover it. 00:26:43.582 [2024-07-15 14:05:38.269190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.582 [2024-07-15 14:05:38.269261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.582 qpair failed and we were unable to recover it. 00:26:43.582 [2024-07-15 14:05:38.269571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.582 [2024-07-15 14:05:38.269638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.582 qpair failed and we were unable to recover it. 00:26:43.582 [2024-07-15 14:05:38.269943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.582 [2024-07-15 14:05:38.269996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.582 qpair failed and we were unable to recover it. 00:26:43.582 [2024-07-15 14:05:38.270303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.582 [2024-07-15 14:05:38.270375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.582 qpair failed and we were unable to recover it. 00:26:43.582 [2024-07-15 14:05:38.270622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.582 [2024-07-15 14:05:38.270674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.582 qpair failed and we were unable to recover it. 00:26:43.582 [2024-07-15 14:05:38.270958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.582 [2024-07-15 14:05:38.271028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.582 qpair failed and we were unable to recover it. 00:26:43.582 [2024-07-15 14:05:38.271296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.582 [2024-07-15 14:05:38.271364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.582 qpair failed and we were unable to recover it. 00:26:43.582 [2024-07-15 14:05:38.271673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.582 [2024-07-15 14:05:38.271725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.582 qpair failed and we were unable to recover it. 00:26:43.582 [2024-07-15 14:05:38.272043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.582 [2024-07-15 14:05:38.272095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.582 qpair failed and we were unable to recover it. 
00:26:43.582 [2024-07-15 14:05:38.272395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.582 [2024-07-15 14:05:38.272462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.582 qpair failed and we were unable to recover it. 00:26:43.582 [2024-07-15 14:05:38.272764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.582 [2024-07-15 14:05:38.272816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.582 qpair failed and we were unable to recover it. 00:26:43.582 [2024-07-15 14:05:38.273084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.582 [2024-07-15 14:05:38.273137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.582 qpair failed and we were unable to recover it. 00:26:43.583 [2024-07-15 14:05:38.273404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.583 [2024-07-15 14:05:38.273472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.583 qpair failed and we were unable to recover it. 00:26:43.583 [2024-07-15 14:05:38.273777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.583 [2024-07-15 14:05:38.273830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.583 qpair failed and we were unable to recover it. 00:26:43.583 [2024-07-15 14:05:38.274131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.583 [2024-07-15 14:05:38.274182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.583 qpair failed and we were unable to recover it. 00:26:43.583 [2024-07-15 14:05:38.274482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.583 [2024-07-15 14:05:38.274553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.583 qpair failed and we were unable to recover it. 00:26:43.583 [2024-07-15 14:05:38.274864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.583 [2024-07-15 14:05:38.274917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.583 qpair failed and we were unable to recover it. 00:26:43.583 [2024-07-15 14:05:38.275167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.583 [2024-07-15 14:05:38.275236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.583 qpair failed and we were unable to recover it. 00:26:43.583 [2024-07-15 14:05:38.275552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.583 [2024-07-15 14:05:38.275620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.583 qpair failed and we were unable to recover it. 
00:26:43.583 [2024-07-15 14:05:38.275937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.583 [2024-07-15 14:05:38.275990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.583 qpair failed and we were unable to recover it. 00:26:43.583 [2024-07-15 14:05:38.276293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.583 [2024-07-15 14:05:38.276361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.583 qpair failed and we were unable to recover it. 00:26:43.583 [2024-07-15 14:05:38.276650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.583 [2024-07-15 14:05:38.276701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.583 qpair failed and we were unable to recover it. 00:26:43.583 [2024-07-15 14:05:38.277018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.583 [2024-07-15 14:05:38.277070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.583 qpair failed and we were unable to recover it. 00:26:43.583 [2024-07-15 14:05:38.277373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.583 [2024-07-15 14:05:38.277442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.583 qpair failed and we were unable to recover it. 00:26:43.583 [2024-07-15 14:05:38.277749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.583 [2024-07-15 14:05:38.277802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.583 qpair failed and we were unable to recover it. 00:26:43.583 [2024-07-15 14:05:38.278055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.583 [2024-07-15 14:05:38.278106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.583 qpair failed and we were unable to recover it. 00:26:43.583 [2024-07-15 14:05:38.278435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.583 [2024-07-15 14:05:38.278503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.583 qpair failed and we were unable to recover it. 00:26:43.583 [2024-07-15 14:05:38.278764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.583 [2024-07-15 14:05:38.278816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.583 qpair failed and we were unable to recover it. 00:26:43.583 [2024-07-15 14:05:38.279117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.583 [2024-07-15 14:05:38.279169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.583 qpair failed and we were unable to recover it. 
00:26:43.583 [2024-07-15 14:05:38.279467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.583 [2024-07-15 14:05:38.279537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.583 qpair failed and we were unable to recover it. 00:26:43.583 [2024-07-15 14:05:38.279830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.583 [2024-07-15 14:05:38.279883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.583 qpair failed and we were unable to recover it. 00:26:43.583 [2024-07-15 14:05:38.280150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.583 [2024-07-15 14:05:38.280201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.583 qpair failed and we were unable to recover it. 00:26:43.583 [2024-07-15 14:05:38.280472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.583 [2024-07-15 14:05:38.280540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.583 qpair failed and we were unable to recover it. 00:26:43.583 [2024-07-15 14:05:38.280839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.583 [2024-07-15 14:05:38.280891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.583 qpair failed and we were unable to recover it. 00:26:43.583 [2024-07-15 14:05:38.281199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.583 [2024-07-15 14:05:38.281273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.583 qpair failed and we were unable to recover it. 00:26:43.583 [2024-07-15 14:05:38.281592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.583 [2024-07-15 14:05:38.281661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.583 qpair failed and we were unable to recover it. 00:26:43.583 [2024-07-15 14:05:38.281944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.583 [2024-07-15 14:05:38.281996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.583 qpair failed and we were unable to recover it. 00:26:43.583 [2024-07-15 14:05:38.282234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.583 [2024-07-15 14:05:38.282304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.583 qpair failed and we were unable to recover it. 00:26:43.583 [2024-07-15 14:05:38.282585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.583 [2024-07-15 14:05:38.282653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.583 qpair failed and we were unable to recover it. 
00:26:43.583 [2024-07-15 14:05:38.282891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.583 [2024-07-15 14:05:38.282962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.583 qpair failed and we were unable to recover it. 00:26:43.583 [2024-07-15 14:05:38.283241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.583 [2024-07-15 14:05:38.283311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.583 qpair failed and we were unable to recover it. 00:26:43.583 [2024-07-15 14:05:38.283596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.583 [2024-07-15 14:05:38.283666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.583 qpair failed and we were unable to recover it. 00:26:43.583 [2024-07-15 14:05:38.283935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.583 [2024-07-15 14:05:38.284006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.583 qpair failed and we were unable to recover it. 00:26:43.583 [2024-07-15 14:05:38.284260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.583 [2024-07-15 14:05:38.284328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.583 qpair failed and we were unable to recover it. 00:26:43.583 [2024-07-15 14:05:38.284608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.583 [2024-07-15 14:05:38.284676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.583 qpair failed and we were unable to recover it. 00:26:43.583 [2024-07-15 14:05:38.284936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.583 [2024-07-15 14:05:38.285007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.583 qpair failed and we were unable to recover it. 00:26:43.583 [2024-07-15 14:05:38.285269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.583 [2024-07-15 14:05:38.285338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.583 qpair failed and we were unable to recover it. 00:26:43.583 [2024-07-15 14:05:38.285612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.583 [2024-07-15 14:05:38.285682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.583 qpair failed and we were unable to recover it. 00:26:43.583 [2024-07-15 14:05:38.286003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.583 [2024-07-15 14:05:38.286076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.583 qpair failed and we were unable to recover it. 
00:26:43.583 [2024-07-15 14:05:38.286384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.583 [2024-07-15 14:05:38.286454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.583 qpair failed and we were unable to recover it. 00:26:43.583 [2024-07-15 14:05:38.286763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.583 [2024-07-15 14:05:38.286816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.584 qpair failed and we were unable to recover it. 00:26:43.584 [2024-07-15 14:05:38.287130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.584 [2024-07-15 14:05:38.287199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.584 qpair failed and we were unable to recover it. 00:26:43.584 [2024-07-15 14:05:38.287458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.584 [2024-07-15 14:05:38.287528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.584 qpair failed and we were unable to recover it. 00:26:43.584 [2024-07-15 14:05:38.287827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.584 [2024-07-15 14:05:38.287880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.584 qpair failed and we were unable to recover it. 00:26:43.584 [2024-07-15 14:05:38.288145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.584 [2024-07-15 14:05:38.288213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.584 qpair failed and we were unable to recover it. 00:26:43.584 [2024-07-15 14:05:38.288474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.584 [2024-07-15 14:05:38.288544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.584 qpair failed and we were unable to recover it. 00:26:43.584 [2024-07-15 14:05:38.288849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.584 [2024-07-15 14:05:38.288919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.584 qpair failed and we were unable to recover it. 00:26:43.584 [2024-07-15 14:05:38.289206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.584 [2024-07-15 14:05:38.289275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.584 qpair failed and we were unable to recover it. 00:26:43.584 [2024-07-15 14:05:38.289574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.584 [2024-07-15 14:05:38.289644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.584 qpair failed and we were unable to recover it. 
00:26:43.584 [2024-07-15 14:05:38.289958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.584 [2024-07-15 14:05:38.290028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.584 qpair failed and we were unable to recover it. 00:26:43.584 [2024-07-15 14:05:38.290284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.584 [2024-07-15 14:05:38.290355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.584 qpair failed and we were unable to recover it. 00:26:43.584 [2024-07-15 14:05:38.290654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.584 [2024-07-15 14:05:38.290707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.584 qpair failed and we were unable to recover it. 00:26:43.584 [2024-07-15 14:05:38.291014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.584 [2024-07-15 14:05:38.291084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.584 qpair failed and we were unable to recover it. 00:26:43.584 [2024-07-15 14:05:38.291357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.584 [2024-07-15 14:05:38.291428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.584 qpair failed and we were unable to recover it. 00:26:43.584 [2024-07-15 14:05:38.291707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.584 [2024-07-15 14:05:38.291776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.584 qpair failed and we were unable to recover it. 00:26:43.584 [2024-07-15 14:05:38.292038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.584 [2024-07-15 14:05:38.292112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.584 qpair failed and we were unable to recover it. 00:26:43.584 [2024-07-15 14:05:38.292324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.584 [2024-07-15 14:05:38.292406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.584 qpair failed and we were unable to recover it. 00:26:43.584 [2024-07-15 14:05:38.292720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.584 [2024-07-15 14:05:38.292786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.584 qpair failed and we were unable to recover it. 00:26:43.584 [2024-07-15 14:05:38.293048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.584 [2024-07-15 14:05:38.293099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.584 qpair failed and we were unable to recover it. 
00:26:43.584 [2024-07-15 14:05:38.293404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.584 [2024-07-15 14:05:38.293474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.584 qpair failed and we were unable to recover it. 00:26:43.584 [2024-07-15 14:05:38.293774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.584 [2024-07-15 14:05:38.293843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.584 qpair failed and we were unable to recover it. 00:26:43.584 [2024-07-15 14:05:38.294109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.584 [2024-07-15 14:05:38.294178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.584 qpair failed and we were unable to recover it. 00:26:43.584 [2024-07-15 14:05:38.294484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.584 [2024-07-15 14:05:38.294563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.584 qpair failed and we were unable to recover it. 00:26:43.584 [2024-07-15 14:05:38.294855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.584 [2024-07-15 14:05:38.294908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.584 qpair failed and we were unable to recover it. 00:26:43.584 [2024-07-15 14:05:38.295165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.584 [2024-07-15 14:05:38.295200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.584 qpair failed and we were unable to recover it. 00:26:43.584 [2024-07-15 14:05:38.295529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.584 [2024-07-15 14:05:38.295598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.584 qpair failed and we were unable to recover it. 00:26:43.584 [2024-07-15 14:05:38.295866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.584 [2024-07-15 14:05:38.295919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.584 qpair failed and we were unable to recover it. 00:26:43.584 [2024-07-15 14:05:38.296191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.584 [2024-07-15 14:05:38.296260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.584 qpair failed and we were unable to recover it. 00:26:43.584 [2024-07-15 14:05:38.296561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.584 [2024-07-15 14:05:38.296631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.584 qpair failed and we were unable to recover it. 
00:26:43.584 [2024-07-15 14:05:38.296946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.584 [2024-07-15 14:05:38.297018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.584 qpair failed and we were unable to recover it. 00:26:43.584 [2024-07-15 14:05:38.297352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.584 [2024-07-15 14:05:38.297420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.584 qpair failed and we were unable to recover it. 00:26:43.584 [2024-07-15 14:05:38.297730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.584 [2024-07-15 14:05:38.297797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.584 qpair failed and we were unable to recover it. 00:26:43.585 [2024-07-15 14:05:38.298067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.585 [2024-07-15 14:05:38.298119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.585 qpair failed and we were unable to recover it. 00:26:43.585 [2024-07-15 14:05:38.298422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.585 [2024-07-15 14:05:38.298491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.585 qpair failed and we were unable to recover it. 00:26:43.585 [2024-07-15 14:05:38.298760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.585 [2024-07-15 14:05:38.298813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.585 qpair failed and we were unable to recover it. 00:26:43.585 [2024-07-15 14:05:38.299115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.585 [2024-07-15 14:05:38.299167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.585 qpair failed and we were unable to recover it. 00:26:43.585 [2024-07-15 14:05:38.299460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.585 [2024-07-15 14:05:38.299529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.585 qpair failed and we were unable to recover it. 00:26:43.585 [2024-07-15 14:05:38.299779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.585 [2024-07-15 14:05:38.299832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.585 qpair failed and we were unable to recover it. 00:26:43.585 [2024-07-15 14:05:38.300126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.585 [2024-07-15 14:05:38.300177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.585 qpair failed and we were unable to recover it. 
00:26:43.585 [2024-07-15 14:05:38.300479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.585 [2024-07-15 14:05:38.300547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.585 qpair failed and we were unable to recover it. 00:26:43.585 [2024-07-15 14:05:38.300812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.585 [2024-07-15 14:05:38.300865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.585 qpair failed and we were unable to recover it. 00:26:43.585 [2024-07-15 14:05:38.301110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.585 [2024-07-15 14:05:38.301180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.585 qpair failed and we were unable to recover it. 00:26:43.585 [2024-07-15 14:05:38.301442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.585 [2024-07-15 14:05:38.301510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.585 qpair failed and we were unable to recover it. 00:26:43.585 [2024-07-15 14:05:38.301729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.585 [2024-07-15 14:05:38.301801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.585 qpair failed and we were unable to recover it. 00:26:43.585 [2024-07-15 14:05:38.302063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.585 [2024-07-15 14:05:38.302115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.585 qpair failed and we were unable to recover it. 00:26:43.585 [2024-07-15 14:05:38.302433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.585 [2024-07-15 14:05:38.302502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.585 qpair failed and we were unable to recover it. 00:26:43.585 [2024-07-15 14:05:38.302803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.585 [2024-07-15 14:05:38.302857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.585 qpair failed and we were unable to recover it. 00:26:43.585 [2024-07-15 14:05:38.303131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.585 [2024-07-15 14:05:38.303200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.585 qpair failed and we were unable to recover it. 00:26:43.585 [2024-07-15 14:05:38.303494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.585 [2024-07-15 14:05:38.303564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.585 qpair failed and we were unable to recover it. 
00:26:43.585 [2024-07-15 14:05:38.303854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.585 [2024-07-15 14:05:38.303908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.585 qpair failed and we were unable to recover it. 00:26:43.585 [2024-07-15 14:05:38.304176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.585 [2024-07-15 14:05:38.304244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.585 qpair failed and we were unable to recover it. 00:26:43.585 [2024-07-15 14:05:38.304498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.585 [2024-07-15 14:05:38.304567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.585 qpair failed and we were unable to recover it. 00:26:43.585 [2024-07-15 14:05:38.304864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.585 [2024-07-15 14:05:38.304917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.585 qpair failed and we were unable to recover it. 00:26:43.585 [2024-07-15 14:05:38.305176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.585 [2024-07-15 14:05:38.305245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.585 qpair failed and we were unable to recover it. 00:26:43.585 [2024-07-15 14:05:38.305561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.585 [2024-07-15 14:05:38.305632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.585 qpair failed and we were unable to recover it. 00:26:43.585 [2024-07-15 14:05:38.305936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.585 [2024-07-15 14:05:38.306007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.585 qpair failed and we were unable to recover it. 00:26:43.585 [2024-07-15 14:05:38.306307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.585 [2024-07-15 14:05:38.306376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.585 qpair failed and we were unable to recover it. 00:26:43.585 [2024-07-15 14:05:38.306664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.585 [2024-07-15 14:05:38.306715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.585 qpair failed and we were unable to recover it. 00:26:43.585 [2024-07-15 14:05:38.307039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.585 [2024-07-15 14:05:38.307122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.585 qpair failed and we were unable to recover it. 
00:26:43.585 [2024-07-15 14:05:38.307424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.585 [2024-07-15 14:05:38.307495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.585 qpair failed and we were unable to recover it. 00:26:43.585 [2024-07-15 14:05:38.307778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.585 [2024-07-15 14:05:38.307830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.585 qpair failed and we were unable to recover it. 00:26:43.585 [2024-07-15 14:05:38.308132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.585 [2024-07-15 14:05:38.308184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.585 qpair failed and we were unable to recover it. 00:26:43.585 [2024-07-15 14:05:38.308414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.585 [2024-07-15 14:05:38.308484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.585 qpair failed and we were unable to recover it. 00:26:43.585 [2024-07-15 14:05:38.308786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.585 [2024-07-15 14:05:38.308839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.585 qpair failed and we were unable to recover it. 00:26:43.585 [2024-07-15 14:05:38.309096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.585 [2024-07-15 14:05:38.309164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.585 qpair failed and we were unable to recover it. 00:26:43.585 [2024-07-15 14:05:38.309445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.585 [2024-07-15 14:05:38.309515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.585 qpair failed and we were unable to recover it. 00:26:43.585 [2024-07-15 14:05:38.309774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.585 [2024-07-15 14:05:38.309826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.585 qpair failed and we were unable to recover it. 00:26:43.585 [2024-07-15 14:05:38.310123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.585 [2024-07-15 14:05:38.310176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.585 qpair failed and we were unable to recover it. 00:26:43.585 [2024-07-15 14:05:38.310480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.585 [2024-07-15 14:05:38.310549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.585 qpair failed and we were unable to recover it. 
00:26:43.585 [2024-07-15 14:05:38.310861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.585 [2024-07-15 14:05:38.310914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.585 qpair failed and we were unable to recover it. 00:26:43.585 [2024-07-15 14:05:38.311177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.585 [2024-07-15 14:05:38.311236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.585 qpair failed and we were unable to recover it. 00:26:43.585 [2024-07-15 14:05:38.311487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.585 [2024-07-15 14:05:38.311557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.585 qpair failed and we were unable to recover it. 00:26:43.585 [2024-07-15 14:05:38.311866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.585 [2024-07-15 14:05:38.311937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.585 qpair failed and we were unable to recover it. 00:26:43.586 [2024-07-15 14:05:38.312242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.586 [2024-07-15 14:05:38.312317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.586 qpair failed and we were unable to recover it. 00:26:43.586 [2024-07-15 14:05:38.312581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.586 [2024-07-15 14:05:38.312650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.586 qpair failed and we were unable to recover it. 00:26:43.586 [2024-07-15 14:05:38.312928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.586 [2024-07-15 14:05:38.313001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.586 qpair failed and we were unable to recover it. 00:26:43.586 [2024-07-15 14:05:38.313301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.586 [2024-07-15 14:05:38.313371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.586 qpair failed and we were unable to recover it. 00:26:43.586 [2024-07-15 14:05:38.313667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.586 [2024-07-15 14:05:38.313718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.586 qpair failed and we were unable to recover it. 00:26:43.586 [2024-07-15 14:05:38.314021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.586 [2024-07-15 14:05:38.314091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.586 qpair failed and we were unable to recover it. 
00:26:43.586 [2024-07-15 14:05:38.314357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.586 [2024-07-15 14:05:38.314425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.586 qpair failed and we were unable to recover it. 00:26:43.586 [2024-07-15 14:05:38.314684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.586 [2024-07-15 14:05:38.314735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.586 qpair failed and we were unable to recover it. 00:26:43.586 [2024-07-15 14:05:38.315072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.586 [2024-07-15 14:05:38.315142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.586 qpair failed and we were unable to recover it. 00:26:43.586 [2024-07-15 14:05:38.315418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.586 [2024-07-15 14:05:38.315490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.586 qpair failed and we were unable to recover it. 00:26:43.586 [2024-07-15 14:05:38.315809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.586 [2024-07-15 14:05:38.315892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.586 qpair failed and we were unable to recover it. 00:26:43.586 [2024-07-15 14:05:38.316198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.586 [2024-07-15 14:05:38.316275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.586 qpair failed and we were unable to recover it. 00:26:43.586 [2024-07-15 14:05:38.316581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.586 [2024-07-15 14:05:38.316649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.586 qpair failed and we were unable to recover it. 00:26:43.586 [2024-07-15 14:05:38.316954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.586 [2024-07-15 14:05:38.317008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.586 qpair failed and we were unable to recover it. 00:26:43.586 [2024-07-15 14:05:38.317310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.586 [2024-07-15 14:05:38.317381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.586 qpair failed and we were unable to recover it. 00:26:43.586 [2024-07-15 14:05:38.317682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.586 [2024-07-15 14:05:38.317762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.586 qpair failed and we were unable to recover it. 
00:26:43.586 [2024-07-15 14:05:38.318060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.586 [2024-07-15 14:05:38.318112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.586 qpair failed and we were unable to recover it. 00:26:43.586 [2024-07-15 14:05:38.318358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.586 [2024-07-15 14:05:38.318427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.586 qpair failed and we were unable to recover it. 00:26:43.586 [2024-07-15 14:05:38.318670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.586 [2024-07-15 14:05:38.318722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.586 qpair failed and we were unable to recover it. 00:26:43.586 [2024-07-15 14:05:38.319051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.586 [2024-07-15 14:05:38.319102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.586 qpair failed and we were unable to recover it. 00:26:43.586 [2024-07-15 14:05:38.319415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.586 [2024-07-15 14:05:38.319485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.586 qpair failed and we were unable to recover it. 00:26:43.586 [2024-07-15 14:05:38.319760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.586 [2024-07-15 14:05:38.319814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.586 qpair failed and we were unable to recover it. 00:26:43.586 [2024-07-15 14:05:38.320067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.586 [2024-07-15 14:05:38.320118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.586 qpair failed and we were unable to recover it. 00:26:43.586 [2024-07-15 14:05:38.320427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.586 [2024-07-15 14:05:38.320496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.586 qpair failed and we were unable to recover it. 00:26:43.586 [2024-07-15 14:05:38.320799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.586 [2024-07-15 14:05:38.320852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.586 qpair failed and we were unable to recover it. 00:26:43.586 [2024-07-15 14:05:38.321117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.586 [2024-07-15 14:05:38.321168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.586 qpair failed and we were unable to recover it. 
00:26:43.586 [2024-07-15 14:05:38.321405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.586 [2024-07-15 14:05:38.321475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.586 qpair failed and we were unable to recover it. 00:26:43.586 [2024-07-15 14:05:38.321755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.586 [2024-07-15 14:05:38.321808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.586 qpair failed and we were unable to recover it. 00:26:43.586 [2024-07-15 14:05:38.322067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.586 [2024-07-15 14:05:38.322119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.586 qpair failed and we were unable to recover it. 00:26:43.586 [2024-07-15 14:05:38.322343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.586 [2024-07-15 14:05:38.322414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.586 qpair failed and we were unable to recover it. 00:26:43.586 [2024-07-15 14:05:38.322713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.586 [2024-07-15 14:05:38.322778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.586 qpair failed and we were unable to recover it. 00:26:43.586 [2024-07-15 14:05:38.323034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.586 [2024-07-15 14:05:38.323085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.586 qpair failed and we were unable to recover it. 00:26:43.586 [2024-07-15 14:05:38.323393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.586 [2024-07-15 14:05:38.323466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.586 qpair failed and we were unable to recover it. 00:26:43.586 [2024-07-15 14:05:38.323720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.586 [2024-07-15 14:05:38.323796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.586 qpair failed and we were unable to recover it. 00:26:43.586 [2024-07-15 14:05:38.324101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.586 [2024-07-15 14:05:38.324153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.586 qpair failed and we were unable to recover it. 00:26:43.586 [2024-07-15 14:05:38.324404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.586 [2024-07-15 14:05:38.324474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.586 qpair failed and we were unable to recover it. 
00:26:43.586 [2024-07-15 14:05:38.324772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.586 [2024-07-15 14:05:38.324825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.586 qpair failed and we were unable to recover it. 00:26:43.586 [2024-07-15 14:05:38.325091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.586 [2024-07-15 14:05:38.325143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.586 qpair failed and we were unable to recover it. 00:26:43.586 [2024-07-15 14:05:38.325447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.586 [2024-07-15 14:05:38.325518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.586 qpair failed and we were unable to recover it. 00:26:43.586 [2024-07-15 14:05:38.325818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.586 [2024-07-15 14:05:38.325871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.586 qpair failed and we were unable to recover it. 00:26:43.586 [2024-07-15 14:05:38.326121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.587 [2024-07-15 14:05:38.326190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.587 qpair failed and we were unable to recover it. 00:26:43.587 [2024-07-15 14:05:38.326504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.587 [2024-07-15 14:05:38.326573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.587 qpair failed and we were unable to recover it. 00:26:43.587 [2024-07-15 14:05:38.326881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.587 [2024-07-15 14:05:38.326934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.587 qpair failed and we were unable to recover it. 00:26:43.587 [2024-07-15 14:05:38.327208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.587 [2024-07-15 14:05:38.327276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.587 qpair failed and we were unable to recover it. 00:26:43.587 [2024-07-15 14:05:38.327539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.587 [2024-07-15 14:05:38.327608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.587 qpair failed and we were unable to recover it. 00:26:43.587 [2024-07-15 14:05:38.327852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.587 [2024-07-15 14:05:38.327905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.587 qpair failed and we were unable to recover it. 
00:26:43.587 [2024-07-15 14:05:38.328221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.587 [2024-07-15 14:05:38.328291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.587 qpair failed and we were unable to recover it. 00:26:43.587 [2024-07-15 14:05:38.328600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.587 [2024-07-15 14:05:38.328669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.587 qpair failed and we were unable to recover it. 00:26:43.587 [2024-07-15 14:05:38.328961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.587 [2024-07-15 14:05:38.329032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.587 qpair failed and we were unable to recover it. 00:26:43.587 [2024-07-15 14:05:38.329342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.587 [2024-07-15 14:05:38.329412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.587 qpair failed and we were unable to recover it. 00:26:43.587 [2024-07-15 14:05:38.329635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.587 [2024-07-15 14:05:38.329686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.587 qpair failed and we were unable to recover it. 00:26:43.587 [2024-07-15 14:05:38.330000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.587 [2024-07-15 14:05:38.330083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.587 qpair failed and we were unable to recover it. 00:26:43.587 [2024-07-15 14:05:38.330390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.587 [2024-07-15 14:05:38.330460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.587 qpair failed and we were unable to recover it. 00:26:43.587 [2024-07-15 14:05:38.330763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.587 [2024-07-15 14:05:38.330816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.587 qpair failed and we were unable to recover it. 00:26:43.587 [2024-07-15 14:05:38.331075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.587 [2024-07-15 14:05:38.331152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.587 qpair failed and we were unable to recover it. 00:26:43.587 [2024-07-15 14:05:38.331448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.587 [2024-07-15 14:05:38.331519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.587 qpair failed and we were unable to recover it. 
00:26:43.587 [2024-07-15 14:05:38.331835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.587 [2024-07-15 14:05:38.331889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.587 qpair failed and we were unable to recover it. 00:26:43.587 [2024-07-15 14:05:38.332168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.587 [2024-07-15 14:05:38.332237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.587 qpair failed and we were unable to recover it. 00:26:43.587 [2024-07-15 14:05:38.332549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.587 [2024-07-15 14:05:38.332620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.587 qpair failed and we were unable to recover it. 00:26:43.587 [2024-07-15 14:05:38.332920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.587 [2024-07-15 14:05:38.332973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.587 qpair failed and we were unable to recover it. 00:26:43.587 [2024-07-15 14:05:38.333279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.587 [2024-07-15 14:05:38.333353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.587 qpair failed and we were unable to recover it. 00:26:43.587 [2024-07-15 14:05:38.333663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.587 [2024-07-15 14:05:38.333734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.587 qpair failed and we were unable to recover it. 00:26:43.587 [2024-07-15 14:05:38.334045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.587 [2024-07-15 14:05:38.334097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.587 qpair failed and we were unable to recover it. 00:26:43.587 [2024-07-15 14:05:38.334350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.587 [2024-07-15 14:05:38.334419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.587 qpair failed and we were unable to recover it. 00:26:43.587 [2024-07-15 14:05:38.334724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.587 [2024-07-15 14:05:38.334787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.587 qpair failed and we were unable to recover it. 00:26:43.587 [2024-07-15 14:05:38.335080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.587 [2024-07-15 14:05:38.335139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.587 qpair failed and we were unable to recover it. 
00:26:43.587 [2024-07-15 14:05:38.335398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.587 [2024-07-15 14:05:38.335468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.587 qpair failed and we were unable to recover it. 00:26:43.587 [2024-07-15 14:05:38.335724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.587 [2024-07-15 14:05:38.335790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.587 qpair failed and we were unable to recover it. 00:26:43.587 [2024-07-15 14:05:38.336019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.587 [2024-07-15 14:05:38.336071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.587 qpair failed and we were unable to recover it. 00:26:43.587 [2024-07-15 14:05:38.336350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.587 [2024-07-15 14:05:38.336419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.587 qpair failed and we were unable to recover it. 00:26:43.587 [2024-07-15 14:05:38.336678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.587 [2024-07-15 14:05:38.336730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.587 qpair failed and we were unable to recover it. 00:26:43.587 [2024-07-15 14:05:38.337004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.587 [2024-07-15 14:05:38.337056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.587 qpair failed and we were unable to recover it. 00:26:43.587 [2024-07-15 14:05:38.337325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.587 [2024-07-15 14:05:38.337394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.587 qpair failed and we were unable to recover it. 00:26:43.587 [2024-07-15 14:05:38.337655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.587 [2024-07-15 14:05:38.337724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.587 qpair failed and we were unable to recover it. 00:26:43.587 [2024-07-15 14:05:38.338033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.587 [2024-07-15 14:05:38.338086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.587 qpair failed and we were unable to recover it. 00:26:43.587 [2024-07-15 14:05:38.338397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.587 [2024-07-15 14:05:38.338466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.587 qpair failed and we were unable to recover it. 
00:26:43.587 [2024-07-15 14:05:38.338784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.587 [2024-07-15 14:05:38.338838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.587 qpair failed and we were unable to recover it. 00:26:43.587 [2024-07-15 14:05:38.339098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.587 [2024-07-15 14:05:38.339150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.587 qpair failed and we were unable to recover it. 00:26:43.587 [2024-07-15 14:05:38.339420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.587 [2024-07-15 14:05:38.339490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.587 qpair failed and we were unable to recover it. 00:26:43.587 [2024-07-15 14:05:38.339802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.587 [2024-07-15 14:05:38.339856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.587 qpair failed and we were unable to recover it. 00:26:43.588 [2024-07-15 14:05:38.340165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.588 [2024-07-15 14:05:38.340216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.588 qpair failed and we were unable to recover it. 00:26:43.588 [2024-07-15 14:05:38.340519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.588 [2024-07-15 14:05:38.340589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.588 qpair failed and we were unable to recover it. 00:26:43.588 [2024-07-15 14:05:38.340898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.588 [2024-07-15 14:05:38.340952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.588 qpair failed and we were unable to recover it. 00:26:43.588 [2024-07-15 14:05:38.341212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.588 [2024-07-15 14:05:38.341290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.588 qpair failed and we were unable to recover it. 00:26:43.588 [2024-07-15 14:05:38.341527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.588 [2024-07-15 14:05:38.341598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.588 qpair failed and we were unable to recover it. 00:26:43.588 [2024-07-15 14:05:38.341892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.588 [2024-07-15 14:05:38.341944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.588 qpair failed and we were unable to recover it. 
00:26:43.588 [2024-07-15 14:05:38.342210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.588 [2024-07-15 14:05:38.342280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.588 qpair failed and we were unable to recover it. 00:26:43.588 [2024-07-15 14:05:38.342585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.588 [2024-07-15 14:05:38.342655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.588 qpair failed and we were unable to recover it. 00:26:43.588 [2024-07-15 14:05:38.342980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.588 [2024-07-15 14:05:38.343052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.588 qpair failed and we were unable to recover it. 00:26:43.588 [2024-07-15 14:05:38.343359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.588 [2024-07-15 14:05:38.343429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.588 qpair failed and we were unable to recover it. 00:26:43.588 [2024-07-15 14:05:38.343730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.588 [2024-07-15 14:05:38.343798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.588 qpair failed and we were unable to recover it. 00:26:43.588 [2024-07-15 14:05:38.344055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.588 [2024-07-15 14:05:38.344107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.588 qpair failed and we were unable to recover it. 00:26:43.588 [2024-07-15 14:05:38.344412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.588 [2024-07-15 14:05:38.344490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.588 qpair failed and we were unable to recover it. 00:26:43.588 [2024-07-15 14:05:38.344785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.588 [2024-07-15 14:05:38.344838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.588 qpair failed and we were unable to recover it. 00:26:43.588 [2024-07-15 14:05:38.345142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.588 [2024-07-15 14:05:38.345212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.588 qpair failed and we were unable to recover it. 00:26:43.588 [2024-07-15 14:05:38.345522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.588 [2024-07-15 14:05:38.345593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.588 qpair failed and we were unable to recover it. 
00:26:43.588 [2024-07-15 14:05:38.345898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.588 [2024-07-15 14:05:38.345951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.588 qpair failed and we were unable to recover it. 00:26:43.588 [2024-07-15 14:05:38.346227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.588 [2024-07-15 14:05:38.346296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.588 qpair failed and we were unable to recover it. 00:26:43.588 [2024-07-15 14:05:38.346577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.588 [2024-07-15 14:05:38.346647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.588 qpair failed and we were unable to recover it. 00:26:43.588 [2024-07-15 14:05:38.346937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.588 [2024-07-15 14:05:38.346990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.588 qpair failed and we were unable to recover it. 00:26:43.588 [2024-07-15 14:05:38.347301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.588 [2024-07-15 14:05:38.347371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.588 qpair failed and we were unable to recover it. 00:26:43.588 [2024-07-15 14:05:38.347625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.588 [2024-07-15 14:05:38.347677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.588 qpair failed and we were unable to recover it. 00:26:43.588 [2024-07-15 14:05:38.347990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.588 [2024-07-15 14:05:38.348043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.588 qpair failed and we were unable to recover it. 00:26:43.588 [2024-07-15 14:05:38.348278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.588 [2024-07-15 14:05:38.348352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.588 qpair failed and we were unable to recover it. 00:26:43.588 [2024-07-15 14:05:38.348653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.588 [2024-07-15 14:05:38.348723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.588 qpair failed and we were unable to recover it. 00:26:43.588 [2024-07-15 14:05:38.349008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.588 [2024-07-15 14:05:38.349080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.588 qpair failed and we were unable to recover it. 
00:26:43.588 [2024-07-15 14:05:38.349339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.588 [2024-07-15 14:05:38.349410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.588 qpair failed and we were unable to recover it. 00:26:43.588 [2024-07-15 14:05:38.349661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.588 [2024-07-15 14:05:38.349712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.588 qpair failed and we were unable to recover it. 00:26:43.588 [2024-07-15 14:05:38.349989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.588 [2024-07-15 14:05:38.350067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.588 qpair failed and we were unable to recover it. 00:26:43.588 [2024-07-15 14:05:38.350337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.588 [2024-07-15 14:05:38.350409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.588 qpair failed and we were unable to recover it. 00:26:43.588 [2024-07-15 14:05:38.350652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.588 [2024-07-15 14:05:38.350704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.588 qpair failed and we were unable to recover it. 00:26:43.588 [2024-07-15 14:05:38.350984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.588 [2024-07-15 14:05:38.351052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.588 qpair failed and we were unable to recover it. 00:26:43.588 [2024-07-15 14:05:38.351360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.588 [2024-07-15 14:05:38.351429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.588 qpair failed and we were unable to recover it. 00:26:43.588 [2024-07-15 14:05:38.351734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.588 [2024-07-15 14:05:38.351810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.588 qpair failed and we were unable to recover it. 00:26:43.588 [2024-07-15 14:05:38.352114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.589 [2024-07-15 14:05:38.352183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.589 qpair failed and we were unable to recover it. 00:26:43.589 [2024-07-15 14:05:38.352489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.589 [2024-07-15 14:05:38.352557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.589 qpair failed and we were unable to recover it. 
00:26:43.589 [2024-07-15 14:05:38.352863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.589 [2024-07-15 14:05:38.352917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.589 qpair failed and we were unable to recover it. 00:26:43.589 [2024-07-15 14:05:38.353189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.589 [2024-07-15 14:05:38.353260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.589 qpair failed and we were unable to recover it. 00:26:43.589 [2024-07-15 14:05:38.353576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.589 [2024-07-15 14:05:38.353646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.589 qpair failed and we were unable to recover it. 00:26:43.589 [2024-07-15 14:05:38.353947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.589 [2024-07-15 14:05:38.354000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.589 qpair failed and we were unable to recover it. 00:26:43.589 [2024-07-15 14:05:38.354273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.589 [2024-07-15 14:05:38.354343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.589 qpair failed and we were unable to recover it. 00:26:43.589 [2024-07-15 14:05:38.354611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.589 [2024-07-15 14:05:38.354684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.589 qpair failed and we were unable to recover it. 00:26:43.589 [2024-07-15 14:05:38.355002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.589 [2024-07-15 14:05:38.355075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.589 qpair failed and we were unable to recover it. 00:26:43.589 [2024-07-15 14:05:38.355390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.589 [2024-07-15 14:05:38.355458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.589 qpair failed and we were unable to recover it. 00:26:43.589 [2024-07-15 14:05:38.355715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.589 [2024-07-15 14:05:38.355778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.589 qpair failed and we were unable to recover it. 00:26:43.589 [2024-07-15 14:05:38.356049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.589 [2024-07-15 14:05:38.356101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.589 qpair failed and we were unable to recover it. 
00:26:43.589 [2024-07-15 14:05:38.356405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.589 [2024-07-15 14:05:38.356473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.589 qpair failed and we were unable to recover it. 00:26:43.589 [2024-07-15 14:05:38.356754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.589 [2024-07-15 14:05:38.356808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.589 qpair failed and we were unable to recover it. 00:26:43.589 [2024-07-15 14:05:38.357100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.589 [2024-07-15 14:05:38.357152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.589 qpair failed and we were unable to recover it. 00:26:43.589 [2024-07-15 14:05:38.357455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.589 [2024-07-15 14:05:38.357525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.589 qpair failed and we were unable to recover it. 00:26:43.589 [2024-07-15 14:05:38.357789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.589 [2024-07-15 14:05:38.357842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.589 qpair failed and we were unable to recover it. 00:26:43.589 [2024-07-15 14:05:38.358063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.589 [2024-07-15 14:05:38.358114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.589 qpair failed and we were unable to recover it. 00:26:43.589 [2024-07-15 14:05:38.358415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.589 [2024-07-15 14:05:38.358500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.589 qpair failed and we were unable to recover it. 00:26:43.589 [2024-07-15 14:05:38.358815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.589 [2024-07-15 14:05:38.358869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.589 qpair failed and we were unable to recover it. 00:26:43.589 [2024-07-15 14:05:38.359135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.589 [2024-07-15 14:05:38.359205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.589 qpair failed and we were unable to recover it. 00:26:43.589 [2024-07-15 14:05:38.359454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.589 [2024-07-15 14:05:38.359524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.589 qpair failed and we were unable to recover it. 
00:26:43.589 [2024-07-15 14:05:38.359787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.589 [2024-07-15 14:05:38.359840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.589 qpair failed and we were unable to recover it. 00:26:43.589 [2024-07-15 14:05:38.360100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.589 [2024-07-15 14:05:38.360169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.589 qpair failed and we were unable to recover it. 00:26:43.589 [2024-07-15 14:05:38.360465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.589 [2024-07-15 14:05:38.360534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.589 qpair failed and we were unable to recover it. 00:26:43.589 [2024-07-15 14:05:38.360828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.589 [2024-07-15 14:05:38.360880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.589 qpair failed and we were unable to recover it. 00:26:43.589 [2024-07-15 14:05:38.361184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.589 [2024-07-15 14:05:38.361253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.589 qpair failed and we were unable to recover it. 00:26:43.589 [2024-07-15 14:05:38.361525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.589 [2024-07-15 14:05:38.361595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.589 qpair failed and we were unable to recover it. 00:26:43.589 [2024-07-15 14:05:38.361879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.589 [2024-07-15 14:05:38.361929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.589 qpair failed and we were unable to recover it. 00:26:43.589 [2024-07-15 14:05:38.362181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.589 [2024-07-15 14:05:38.362250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.589 qpair failed and we were unable to recover it. 00:26:43.589 [2024-07-15 14:05:38.362470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.589 [2024-07-15 14:05:38.362540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.589 qpair failed and we were unable to recover it. 00:26:43.589 [2024-07-15 14:05:38.362818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.589 [2024-07-15 14:05:38.362870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.589 qpair failed and we were unable to recover it. 
00:26:43.589 [2024-07-15 14:05:38.363105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.589 [2024-07-15 14:05:38.363178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.589 qpair failed and we were unable to recover it. 00:26:43.589 [2024-07-15 14:05:38.363501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.589 [2024-07-15 14:05:38.363570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.589 qpair failed and we were unable to recover it. 00:26:43.589 [2024-07-15 14:05:38.363881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.589 [2024-07-15 14:05:38.363951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.589 qpair failed and we were unable to recover it. 00:26:43.589 [2024-07-15 14:05:38.364259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.589 [2024-07-15 14:05:38.364328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.589 qpair failed and we were unable to recover it. 00:26:43.589 [2024-07-15 14:05:38.364548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.589 [2024-07-15 14:05:38.364599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.589 qpair failed and we were unable to recover it. 00:26:43.589 [2024-07-15 14:05:38.364856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.589 [2024-07-15 14:05:38.364926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.589 qpair failed and we were unable to recover it. 00:26:43.589 [2024-07-15 14:05:38.365244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.589 [2024-07-15 14:05:38.365316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.589 qpair failed and we were unable to recover it. 00:26:43.589 [2024-07-15 14:05:38.365613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.589 [2024-07-15 14:05:38.365664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.589 qpair failed and we were unable to recover it. 00:26:43.589 [2024-07-15 14:05:38.365977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.589 [2024-07-15 14:05:38.366049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.589 qpair failed and we were unable to recover it. 00:26:43.589 [2024-07-15 14:05:38.366311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.590 [2024-07-15 14:05:38.366382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.590 qpair failed and we were unable to recover it. 
00:26:43.590 [2024-07-15 14:05:38.366633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.590 [2024-07-15 14:05:38.366684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.590 qpair failed and we were unable to recover it. 00:26:43.590 [2024-07-15 14:05:38.367008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.590 [2024-07-15 14:05:38.367079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.590 qpair failed and we were unable to recover it. 00:26:43.590 [2024-07-15 14:05:38.367405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.590 [2024-07-15 14:05:38.367473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.590 qpair failed and we were unable to recover it. 00:26:43.590 [2024-07-15 14:05:38.367721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.590 [2024-07-15 14:05:38.367797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.590 qpair failed and we were unable to recover it. 00:26:43.590 [2024-07-15 14:05:38.368104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.590 [2024-07-15 14:05:38.368184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.590 qpair failed and we were unable to recover it. 00:26:43.590 [2024-07-15 14:05:38.368485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.590 [2024-07-15 14:05:38.368555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.590 qpair failed and we were unable to recover it. 00:26:43.590 [2024-07-15 14:05:38.368853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.590 [2024-07-15 14:05:38.368906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.590 qpair failed and we were unable to recover it. 00:26:43.590 [2024-07-15 14:05:38.369181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.590 [2024-07-15 14:05:38.369250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.590 qpair failed and we were unable to recover it. 00:26:43.590 [2024-07-15 14:05:38.369556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.590 [2024-07-15 14:05:38.369625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.590 qpair failed and we were unable to recover it. 00:26:43.590 [2024-07-15 14:05:38.369923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.590 [2024-07-15 14:05:38.369976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.590 qpair failed and we were unable to recover it. 
00:26:43.590 [2024-07-15 14:05:38.370278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.590 [2024-07-15 14:05:38.370347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.590 qpair failed and we were unable to recover it. 00:26:43.590 [2024-07-15 14:05:38.370654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.590 [2024-07-15 14:05:38.370689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.590 qpair failed and we were unable to recover it. 00:26:43.590 [2024-07-15 14:05:38.371023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.590 [2024-07-15 14:05:38.371075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.590 qpair failed and we were unable to recover it. 00:26:43.590 [2024-07-15 14:05:38.371382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.590 [2024-07-15 14:05:38.371455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.590 qpair failed and we were unable to recover it. 00:26:43.590 [2024-07-15 14:05:38.371759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.590 [2024-07-15 14:05:38.371811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.590 qpair failed and we were unable to recover it. 00:26:43.590 [2024-07-15 14:05:38.372061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.590 [2024-07-15 14:05:38.372113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.590 qpair failed and we were unable to recover it. 00:26:43.590 [2024-07-15 14:05:38.372415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.590 [2024-07-15 14:05:38.372485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.590 qpair failed and we were unable to recover it. 00:26:43.590 [2024-07-15 14:05:38.372789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.590 [2024-07-15 14:05:38.372842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.590 qpair failed and we were unable to recover it. 00:26:43.590 [2024-07-15 14:05:38.373112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.590 [2024-07-15 14:05:38.373185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.590 qpair failed and we were unable to recover it. 00:26:43.590 [2024-07-15 14:05:38.373489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.590 [2024-07-15 14:05:38.373559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.590 qpair failed and we were unable to recover it. 
00:26:43.590 [2024-07-15 14:05:38.373863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.590 [2024-07-15 14:05:38.373916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.590 qpair failed and we were unable to recover it. 00:26:43.590 [2024-07-15 14:05:38.374174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.590 [2024-07-15 14:05:38.374246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.590 qpair failed and we were unable to recover it. 00:26:43.590 [2024-07-15 14:05:38.374499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.590 [2024-07-15 14:05:38.374569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.590 qpair failed and we were unable to recover it. 00:26:43.590 [2024-07-15 14:05:38.374873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.590 [2024-07-15 14:05:38.374925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.590 qpair failed and we were unable to recover it. 00:26:43.590 [2024-07-15 14:05:38.375236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.590 [2024-07-15 14:05:38.375306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.590 qpair failed and we were unable to recover it. 00:26:43.590 [2024-07-15 14:05:38.375557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.590 [2024-07-15 14:05:38.375626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.590 qpair failed and we were unable to recover it. 00:26:43.590 [2024-07-15 14:05:38.375930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.590 [2024-07-15 14:05:38.376001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.590 qpair failed and we were unable to recover it. 00:26:43.590 [2024-07-15 14:05:38.376303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.590 [2024-07-15 14:05:38.376364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.590 qpair failed and we were unable to recover it. 00:26:43.590 [2024-07-15 14:05:38.376667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.590 [2024-07-15 14:05:38.376719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.590 qpair failed and we were unable to recover it. 00:26:43.590 [2024-07-15 14:05:38.377045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.590 [2024-07-15 14:05:38.377116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.590 qpair failed and we were unable to recover it. 
00:26:43.590 [2024-07-15 14:05:38.377413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.590 [2024-07-15 14:05:38.377483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.590 qpair failed and we were unable to recover it. 00:26:43.590 [2024-07-15 14:05:38.377755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.590 [2024-07-15 14:05:38.377815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.590 qpair failed and we were unable to recover it. 00:26:43.590 [2024-07-15 14:05:38.378065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.590 [2024-07-15 14:05:38.378117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.590 qpair failed and we were unable to recover it. 00:26:43.590 [2024-07-15 14:05:38.378377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.590 [2024-07-15 14:05:38.378453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.590 qpair failed and we were unable to recover it. 00:26:43.590 [2024-07-15 14:05:38.378763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.590 [2024-07-15 14:05:38.378824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.590 qpair failed and we were unable to recover it. 00:26:43.590 [2024-07-15 14:05:38.379053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.590 [2024-07-15 14:05:38.379105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.590 qpair failed and we were unable to recover it. 00:26:43.590 [2024-07-15 14:05:38.379307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.590 [2024-07-15 14:05:38.379376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.590 qpair failed and we were unable to recover it. 00:26:43.590 [2024-07-15 14:05:38.379585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.590 [2024-07-15 14:05:38.379656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.590 qpair failed and we were unable to recover it. 00:26:43.590 [2024-07-15 14:05:38.379973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.590 [2024-07-15 14:05:38.380026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.590 qpair failed and we were unable to recover it. 00:26:43.590 [2024-07-15 14:05:38.380228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.590 [2024-07-15 14:05:38.380297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.590 qpair failed and we were unable to recover it. 
00:26:43.590 [2024-07-15 14:05:38.380541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.591 [2024-07-15 14:05:38.380592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.591 qpair failed and we were unable to recover it. 00:26:43.591 [2024-07-15 14:05:38.380866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.591 [2024-07-15 14:05:38.380919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.591 qpair failed and we were unable to recover it. 00:26:43.591 [2024-07-15 14:05:38.381207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.591 [2024-07-15 14:05:38.381257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.591 qpair failed and we were unable to recover it. 00:26:43.591 [2024-07-15 14:05:38.381495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.591 [2024-07-15 14:05:38.381547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.591 qpair failed and we were unable to recover it. 00:26:43.591 [2024-07-15 14:05:38.381825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.591 [2024-07-15 14:05:38.381898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.591 qpair failed and we were unable to recover it. 00:26:43.591 [2024-07-15 14:05:38.382134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.591 [2024-07-15 14:05:38.382204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.591 qpair failed and we were unable to recover it. 00:26:43.591 [2024-07-15 14:05:38.382465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.591 [2024-07-15 14:05:38.382516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.591 qpair failed and we were unable to recover it. 00:26:43.591 [2024-07-15 14:05:38.382761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.591 [2024-07-15 14:05:38.382813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.591 qpair failed and we were unable to recover it. 00:26:43.591 [2024-07-15 14:05:38.382991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.591 [2024-07-15 14:05:38.383065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.591 qpair failed and we were unable to recover it. 00:26:43.591 [2024-07-15 14:05:38.383338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.591 [2024-07-15 14:05:38.383406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.591 qpair failed and we were unable to recover it. 
00:26:43.591 [2024-07-15 14:05:38.383708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.591 [2024-07-15 14:05:38.383795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.591 qpair failed and we were unable to recover it. 00:26:43.591 [2024-07-15 14:05:38.384032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.591 [2024-07-15 14:05:38.384103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.591 qpair failed and we were unable to recover it. 00:26:43.591 [2024-07-15 14:05:38.384369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.591 [2024-07-15 14:05:38.384421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.591 qpair failed and we were unable to recover it. 00:26:43.591 [2024-07-15 14:05:38.384644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.591 [2024-07-15 14:05:38.384695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.591 qpair failed and we were unable to recover it. 00:26:43.591 [2024-07-15 14:05:38.384905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.591 [2024-07-15 14:05:38.384956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.591 qpair failed and we were unable to recover it. 00:26:43.591 [2024-07-15 14:05:38.385156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.591 [2024-07-15 14:05:38.385220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.591 qpair failed and we were unable to recover it. 00:26:43.591 [2024-07-15 14:05:38.385459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.591 [2024-07-15 14:05:38.385539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.591 qpair failed and we were unable to recover it. 00:26:43.591 [2024-07-15 14:05:38.385840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.591 [2024-07-15 14:05:38.385892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.591 qpair failed and we were unable to recover it. 00:26:43.591 [2024-07-15 14:05:38.386112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.591 [2024-07-15 14:05:38.386189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.591 qpair failed and we were unable to recover it. 00:26:43.591 [2024-07-15 14:05:38.386519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.591 [2024-07-15 14:05:38.386601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.591 qpair failed and we were unable to recover it. 
00:26:43.591 [2024-07-15 14:05:38.386870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.591 [2024-07-15 14:05:38.386922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.591 qpair failed and we were unable to recover it. 00:26:43.591 [2024-07-15 14:05:38.387155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.591 [2024-07-15 14:05:38.387223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.591 qpair failed and we were unable to recover it. 00:26:43.591 [2024-07-15 14:05:38.387501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.591 [2024-07-15 14:05:38.387579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.591 qpair failed and we were unable to recover it. 00:26:43.591 [2024-07-15 14:05:38.387798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.591 [2024-07-15 14:05:38.387872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.591 qpair failed and we were unable to recover it. 00:26:43.591 [2024-07-15 14:05:38.388164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.591 [2024-07-15 14:05:38.388245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.591 qpair failed and we were unable to recover it. 00:26:43.591 [2024-07-15 14:05:38.388516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.591 [2024-07-15 14:05:38.388589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.591 qpair failed and we were unable to recover it. 00:26:43.591 [2024-07-15 14:05:38.388805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.591 [2024-07-15 14:05:38.388882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.591 qpair failed and we were unable to recover it. 00:26:43.591 [2024-07-15 14:05:38.389100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.591 [2024-07-15 14:05:38.389182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.591 qpair failed and we were unable to recover it. 00:26:43.591 [2024-07-15 14:05:38.389442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.591 [2024-07-15 14:05:38.389510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.591 qpair failed and we were unable to recover it. 00:26:43.591 [2024-07-15 14:05:38.389749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.591 [2024-07-15 14:05:38.389802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.591 qpair failed and we were unable to recover it. 
00:26:43.591 [2024-07-15 14:05:38.389997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.591 [2024-07-15 14:05:38.390072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.591 qpair failed and we were unable to recover it. 00:26:43.591 [2024-07-15 14:05:38.390326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.591 [2024-07-15 14:05:38.390394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.591 qpair failed and we were unable to recover it. 00:26:43.591 [2024-07-15 14:05:38.390637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.591 [2024-07-15 14:05:38.390689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.591 qpair failed and we were unable to recover it. 00:26:43.591 [2024-07-15 14:05:38.390912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.591 [2024-07-15 14:05:38.390984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.591 qpair failed and we were unable to recover it. 00:26:43.591 [2024-07-15 14:05:38.391217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.591 [2024-07-15 14:05:38.391287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.591 qpair failed and we were unable to recover it. 00:26:43.591 [2024-07-15 14:05:38.391557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.591 [2024-07-15 14:05:38.391626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.591 qpair failed and we were unable to recover it. 00:26:43.591 [2024-07-15 14:05:38.391866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.591 [2024-07-15 14:05:38.391941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.591 qpair failed and we were unable to recover it. 00:26:43.591 [2024-07-15 14:05:38.392180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.591 [2024-07-15 14:05:38.392251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.591 qpair failed and we were unable to recover it. 00:26:43.591 [2024-07-15 14:05:38.392536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.591 [2024-07-15 14:05:38.392606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.591 qpair failed and we were unable to recover it. 00:26:43.591 [2024-07-15 14:05:38.392901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.591 [2024-07-15 14:05:38.392970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.591 qpair failed and we were unable to recover it. 
00:26:43.591 [2024-07-15 14:05:38.393169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.591 [2024-07-15 14:05:38.393239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.591 qpair failed and we were unable to recover it. 00:26:43.592 [2024-07-15 14:05:38.393483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.592 [2024-07-15 14:05:38.393534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.592 qpair failed and we were unable to recover it. 00:26:43.592 [2024-07-15 14:05:38.393793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.592 [2024-07-15 14:05:38.393846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.592 qpair failed and we were unable to recover it. 00:26:43.592 [2024-07-15 14:05:38.394043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.592 [2024-07-15 14:05:38.394113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.592 qpair failed and we were unable to recover it. 00:26:43.592 [2024-07-15 14:05:38.394356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.592 [2024-07-15 14:05:38.394426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.592 qpair failed and we were unable to recover it. 00:26:43.592 [2024-07-15 14:05:38.394681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.592 [2024-07-15 14:05:38.394732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.592 qpair failed and we were unable to recover it. 00:26:43.592 [2024-07-15 14:05:38.394962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.592 [2024-07-15 14:05:38.395032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.592 qpair failed and we were unable to recover it. 00:26:43.592 [2024-07-15 14:05:38.395351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.592 [2024-07-15 14:05:38.395420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.592 qpair failed and we were unable to recover it. 00:26:43.592 [2024-07-15 14:05:38.395658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.592 [2024-07-15 14:05:38.395709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.592 qpair failed and we were unable to recover it. 00:26:43.592 [2024-07-15 14:05:38.395926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.592 [2024-07-15 14:05:38.395997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.592 qpair failed and we were unable to recover it. 
00:26:43.592 [2024-07-15 14:05:38.396212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.592 [2024-07-15 14:05:38.396282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.592 qpair failed and we were unable to recover it. 00:26:43.592 [2024-07-15 14:05:38.396548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.592 [2024-07-15 14:05:38.396615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.592 qpair failed and we were unable to recover it. 00:26:43.592 [2024-07-15 14:05:38.396844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.592 [2024-07-15 14:05:38.396916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.592 qpair failed and we were unable to recover it. 00:26:43.592 [2024-07-15 14:05:38.397210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.592 [2024-07-15 14:05:38.397289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.592 qpair failed and we were unable to recover it. 00:26:43.592 [2024-07-15 14:05:38.397561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.592 [2024-07-15 14:05:38.397628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.592 qpair failed and we were unable to recover it. 00:26:43.592 [2024-07-15 14:05:38.397903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.592 [2024-07-15 14:05:38.397974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.592 qpair failed and we were unable to recover it. 00:26:43.592 [2024-07-15 14:05:38.398383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.592 [2024-07-15 14:05:38.398460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.592 qpair failed and we were unable to recover it. 00:26:43.592 [2024-07-15 14:05:38.398756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.592 [2024-07-15 14:05:38.398811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.592 qpair failed and we were unable to recover it. 00:26:43.592 [2024-07-15 14:05:38.398995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.592 [2024-07-15 14:05:38.399066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.592 qpair failed and we were unable to recover it. 00:26:43.592 [2024-07-15 14:05:38.399346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.592 [2024-07-15 14:05:38.399418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.592 qpair failed and we were unable to recover it. 
00:26:43.592 [2024-07-15 14:05:38.399716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.592 [2024-07-15 14:05:38.399806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.592 qpair failed and we were unable to recover it. 00:26:43.592 [2024-07-15 14:05:38.400006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.592 [2024-07-15 14:05:38.400068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.592 qpair failed and we were unable to recover it. 00:26:43.592 [2024-07-15 14:05:38.400360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.592 [2024-07-15 14:05:38.400451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.592 qpair failed and we were unable to recover it. 00:26:43.592 [2024-07-15 14:05:38.400708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.592 [2024-07-15 14:05:38.400779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.592 qpair failed and we were unable to recover it. 00:26:43.592 [2024-07-15 14:05:38.400985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.592 [2024-07-15 14:05:38.401036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.592 qpair failed and we were unable to recover it. 00:26:43.592 [2024-07-15 14:05:38.401287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.592 [2024-07-15 14:05:38.401356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.592 qpair failed and we were unable to recover it. 00:26:43.592 [2024-07-15 14:05:38.401585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.592 [2024-07-15 14:05:38.401658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.592 qpair failed and we were unable to recover it. 00:26:43.592 [2024-07-15 14:05:38.401935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.592 [2024-07-15 14:05:38.401987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.592 qpair failed and we were unable to recover it. 00:26:43.592 [2024-07-15 14:05:38.402226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.402295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 00:26:43.861 [2024-07-15 14:05:38.402568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.402638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 
00:26:43.861 [2024-07-15 14:05:38.402890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.402959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 00:26:43.861 [2024-07-15 14:05:38.403241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.403312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 00:26:43.861 [2024-07-15 14:05:38.403562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.403630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 00:26:43.861 [2024-07-15 14:05:38.403881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.403952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 00:26:43.861 [2024-07-15 14:05:38.404161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.404230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 00:26:43.861 [2024-07-15 14:05:38.404477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.404547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 00:26:43.861 [2024-07-15 14:05:38.405014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.405068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 00:26:43.861 [2024-07-15 14:05:38.405378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.405448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 00:26:43.861 [2024-07-15 14:05:38.405718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.405783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 00:26:43.861 [2024-07-15 14:05:38.406081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.406151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 
00:26:43.861 [2024-07-15 14:05:38.406451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.406520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 00:26:43.861 [2024-07-15 14:05:38.406799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.406851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 00:26:43.861 [2024-07-15 14:05:38.407132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.407201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 00:26:43.861 [2024-07-15 14:05:38.407442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.407510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 00:26:43.861 [2024-07-15 14:05:38.407820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.407874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 00:26:43.861 [2024-07-15 14:05:38.408175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.408244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 00:26:43.861 [2024-07-15 14:05:38.408498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.408576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 00:26:43.861 [2024-07-15 14:05:38.408830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.408883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 00:26:43.861 [2024-07-15 14:05:38.409190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.409259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 00:26:43.861 [2024-07-15 14:05:38.409522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.409591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 
00:26:43.861 [2024-07-15 14:05:38.409863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.409934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 00:26:43.861 [2024-07-15 14:05:38.410195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.410265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 00:26:43.861 [2024-07-15 14:05:38.410529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.410597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 00:26:43.861 [2024-07-15 14:05:38.410881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.410952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 00:26:43.861 [2024-07-15 14:05:38.411257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.411327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 00:26:43.861 [2024-07-15 14:05:38.411626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.411678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 00:26:43.861 [2024-07-15 14:05:38.411978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.412049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 00:26:43.861 [2024-07-15 14:05:38.412357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.412427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 00:26:43.861 [2024-07-15 14:05:38.412675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.412727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 00:26:43.861 [2024-07-15 14:05:38.413055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.413132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 
00:26:43.861 [2024-07-15 14:05:38.413438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.413507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 00:26:43.861 [2024-07-15 14:05:38.413814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.413886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 00:26:43.861 [2024-07-15 14:05:38.414203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.414272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 00:26:43.861 [2024-07-15 14:05:38.414580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.414650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 00:26:43.861 [2024-07-15 14:05:38.414959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.415012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 00:26:43.861 [2024-07-15 14:05:38.415272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.415342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 00:26:43.861 [2024-07-15 14:05:38.415645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.415715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 00:26:43.861 [2024-07-15 14:05:38.416011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.416064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 00:26:43.861 [2024-07-15 14:05:38.416379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.416448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 00:26:43.861 [2024-07-15 14:05:38.416750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.416804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 
00:26:43.861 [2024-07-15 14:05:38.417102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.417153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 00:26:43.861 [2024-07-15 14:05:38.417413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.417481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 00:26:43.861 [2024-07-15 14:05:38.417773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.417827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 00:26:43.861 [2024-07-15 14:05:38.418136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.418195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 00:26:43.861 [2024-07-15 14:05:38.418465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.418532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 00:26:43.861 [2024-07-15 14:05:38.418794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.418848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 00:26:43.861 [2024-07-15 14:05:38.419161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.419231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 00:26:43.861 [2024-07-15 14:05:38.419502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.419571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 00:26:43.861 [2024-07-15 14:05:38.419870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.419924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 00:26:43.861 [2024-07-15 14:05:38.420226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.420296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 
00:26:43.861 [2024-07-15 14:05:38.420598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.420667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 00:26:43.861 [2024-07-15 14:05:38.420943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.420996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 00:26:43.861 [2024-07-15 14:05:38.421280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.421351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 00:26:43.861 [2024-07-15 14:05:38.421660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.421730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 00:26:43.861 [2024-07-15 14:05:38.422054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.422125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 00:26:43.861 [2024-07-15 14:05:38.422433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.422501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 00:26:43.861 [2024-07-15 14:05:38.422806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.422859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 00:26:43.861 [2024-07-15 14:05:38.423171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.423241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 00:26:43.861 [2024-07-15 14:05:38.423489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.423558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 00:26:43.861 [2024-07-15 14:05:38.423815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.423868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 
00:26:43.861 [2024-07-15 14:05:38.424129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.424199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 00:26:43.861 [2024-07-15 14:05:38.424454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.424522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 00:26:43.861 [2024-07-15 14:05:38.424756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.424818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 00:26:43.861 [2024-07-15 14:05:38.425001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.425070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 00:26:43.861 [2024-07-15 14:05:38.425291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.861 [2024-07-15 14:05:38.425358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.861 qpair failed and we were unable to recover it. 00:26:43.861 [2024-07-15 14:05:38.425555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.425624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.425849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.425918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.426141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.426210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.426440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.426511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.426728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.426794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 
00:26:43.862 [2024-07-15 14:05:38.426993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.427063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.427281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.427351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.427590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.427641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.427860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.427930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.428176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.428247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.428471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.428541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.428758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.428833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.429078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.429147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.429362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.429432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.429660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.429711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 
00:26:43.862 [2024-07-15 14:05:38.429900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.429970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.430209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.430279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.430481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.430551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.430768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.430839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.431104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.431156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.431363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.431432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.431634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.431685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.431951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.432028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.432264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.432333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.432576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.432627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 
00:26:43.862 [2024-07-15 14:05:38.432844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.432914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.433161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.433229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.433411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.433480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.433696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.433758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.433981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.434058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.434255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.434323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.434535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.434586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.434796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.434849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.435102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.435171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.435348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.435418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 
00:26:43.862 [2024-07-15 14:05:38.435593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.435644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.435845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.435917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.436075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.436126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.436302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.436353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.436512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.436563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.436782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.436836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.436983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.437035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.437192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.437243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.437428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.437479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.437663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.437713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 
00:26:43.862 [2024-07-15 14:05:38.438054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.438106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.438290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.438349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.438562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.438613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.438811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.438863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.439053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.439104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.439266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.439318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.439496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.439547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.439763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.439816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.439990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.440058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.440237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.440288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 
00:26:43.862 [2024-07-15 14:05:38.440481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.440532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.440690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.440756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.440960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.441038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.441203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.441274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.441496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.441545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.441777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.441834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.442054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.442107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.442306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.442356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.442537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.442593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.442806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.442869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 
00:26:43.862 [2024-07-15 14:05:38.443101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.443152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.443315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.443366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.443577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.443627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.443878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.443948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.444200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.444269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.444425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.444476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.444689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.444753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.444912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.444985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.445180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.445255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 00:26:43.862 [2024-07-15 14:05:38.445474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.862 [2024-07-15 14:05:38.445525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.862 qpair failed and we were unable to recover it. 
00:26:43.862 [2024-07-15 14:05:38.445714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.862 [2024-07-15 14:05:38.445776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420
00:26:43.862 qpair failed and we were unable to recover it.
[... the same three-line error group repeats for each subsequent connection attempt from 14:05:38.445714 through 14:05:38.499814, every attempt failing with errno = 111 against tqpair=0x23f9ea0, addr=10.0.0.2, port=4420, and every qpair reported as failed and unrecoverable ...]
00:26:43.865 [2024-07-15 14:05:38.500015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.865 [2024-07-15 14:05:38.500089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.865 qpair failed and we were unable to recover it. 00:26:43.865 [2024-07-15 14:05:38.500274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.865 [2024-07-15 14:05:38.500325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.865 qpair failed and we were unable to recover it. 00:26:43.865 [2024-07-15 14:05:38.500503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.865 [2024-07-15 14:05:38.500554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.865 qpair failed and we were unable to recover it. 00:26:43.865 [2024-07-15 14:05:38.500712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.865 [2024-07-15 14:05:38.500801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.865 qpair failed and we were unable to recover it. 00:26:43.865 [2024-07-15 14:05:38.501019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.865 [2024-07-15 14:05:38.501094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.865 qpair failed and we were unable to recover it. 00:26:43.865 [2024-07-15 14:05:38.501289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.865 [2024-07-15 14:05:38.501347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.501498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.501548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.501761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.501828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.502021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.502071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.502258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.502308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 
00:26:43.866 [2024-07-15 14:05:38.502515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.502566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.502725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.502791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.502987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.503038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.503218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.503270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.503451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.503501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.503647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.503698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.503941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.503993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.504159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.504209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.504397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.504447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.504602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.504653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 
00:26:43.866 [2024-07-15 14:05:38.504850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.504903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.505082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.505133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.505321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.505371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.505528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.505578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.505786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.505839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.506061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.506129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.506345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.506395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.506568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.506619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.506810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.506886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.507098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.507149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 
00:26:43.866 [2024-07-15 14:05:38.507329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.507380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.507568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.507618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.507807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.507888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.508102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.508152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.508361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.508412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.508597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.508647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.508836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.508905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.509129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.509197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.509393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.509444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.509666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.509717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 
00:26:43.866 [2024-07-15 14:05:38.509959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.510030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.510205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.510274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.510453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.510503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.510653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.510703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.510905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.510956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.511174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.511225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.511414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.511465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.511677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.511728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.511967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.512019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.512238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.512289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 
00:26:43.866 [2024-07-15 14:05:38.512468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.512517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.512699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.512766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.512933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.513005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.513237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.513312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.513493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.513543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.513787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.513839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.514052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.514104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.514297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.514349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.514559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.514611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.514835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.514906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 
00:26:43.866 [2024-07-15 14:05:38.515112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.515182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.515367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.515418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.515586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.515638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.515859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.515913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.516129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.516180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.516398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.516449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.516663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.516714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.516912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.516963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.517156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.517206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.517359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.517410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 
00:26:43.866 [2024-07-15 14:05:38.517589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.517639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.517807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.517861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.518055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.518128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.518346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.518406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.518599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.518651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.518847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.518899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.519055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.519106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.519322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.519374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.519562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.519613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.519790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.519843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 
00:26:43.866 [2024-07-15 14:05:38.520043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.520095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.520281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.520333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.520533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.520585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.866 [2024-07-15 14:05:38.520785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.866 [2024-07-15 14:05:38.520838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.866 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.521024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.521075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.521234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.521258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.521422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.521474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.521666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.521717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.521889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.521941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.522121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.522171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 
00:26:43.867 [2024-07-15 14:05:38.522349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.522400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.522583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.522634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.522831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.522902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.523102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.523170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.523383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.523434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.523638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.523689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.523859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.523933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.524160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.524227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.524415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.524466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.524653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.524704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 
00:26:43.867 [2024-07-15 14:05:38.524891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.524950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.525159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.525209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.525416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.525466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.525650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.525700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.525899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.525951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.526139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.526207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.526390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.526441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.526623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.526674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.526905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.526957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.527153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.527220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 
00:26:43.867 [2024-07-15 14:05:38.527402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.527452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.527638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.527689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.527908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.527961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.528169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.528220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.528418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.528469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.528613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.528663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.528863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.528915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.529096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.529147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.529361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.529412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.529636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.529687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 
00:26:43.867 [2024-07-15 14:05:38.529923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.529975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.530210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.530277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.530456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.530507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.530661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.530712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.530919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.530987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.531228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.531297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.531480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.531531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.531715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.531793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.531962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.532013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.532200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.532251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 
00:26:43.867 [2024-07-15 14:05:38.532432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.532482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.532655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.532706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.532952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.533003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.533177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.533228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.533412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.533462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.533649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.533700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.533904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.533956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.534144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.534194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.534358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.534409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.534616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.534667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 
00:26:43.867 [2024-07-15 14:05:38.534902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.534954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.535145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.535196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.535379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.535429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.535610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.535661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.535882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.535935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.536125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.536176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.536396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.536465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.536675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.536726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.536976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.537045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.537209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.537281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 
00:26:43.867 [2024-07-15 14:05:38.537464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.537514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.537700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.537764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.537995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.538071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.538287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.538356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.538565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.538616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.538814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.538888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.867 qpair failed and we were unable to recover it. 00:26:43.867 [2024-07-15 14:05:38.539074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.867 [2024-07-15 14:05:38.539145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.539340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.539411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.539593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.539644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.539872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.539943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 
00:26:43.868 [2024-07-15 14:05:38.540152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.540221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.540417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.540468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.540658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.540708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.540926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.540978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.541189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.541240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.541447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.541497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.541677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.541728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.541936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.542009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.542249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.542323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.542531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.542582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 
00:26:43.868 [2024-07-15 14:05:38.542767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.542821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.543014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.543084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.543305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.543356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.543548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.543599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.543812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.543887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.544112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.544163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.544373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.544423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.544622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.544673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.544841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.544893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.545051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.545102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 
00:26:43.868 [2024-07-15 14:05:38.545289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.545339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.545521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.545572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.545772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.545806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.546030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.546081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.546297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.546347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.546492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.546542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.546725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.546792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.547011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.547061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.547231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.547281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.547487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.547538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 
00:26:43.868 [2024-07-15 14:05:38.547697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.547760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.547964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.548034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.548245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.548297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.548483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.548533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.548692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.548768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.548989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.549046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.549264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.549314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.549526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.549584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.549803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.549856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.550079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.550150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 
00:26:43.868 [2024-07-15 14:05:38.550340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.550415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.550573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.550623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.550837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.550906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.551142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.551192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.551353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.551404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.551610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.551660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.551905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.551973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.552172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.552241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.552452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.552503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.552668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.552718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 
00:26:43.868 [2024-07-15 14:05:38.552919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.552971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.553161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.553212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.553369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.553419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.553598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.553648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.553817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.553870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.554053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.554103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.554302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.554353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.554549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.554600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.554765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.554817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.555003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.555053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 
00:26:43.868 [2024-07-15 14:05:38.555204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.555254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.555442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.555493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.555702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.555773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.555958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.556008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.556219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.556270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.556451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.556502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.556651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.556701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.556916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.556968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.557186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.557237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.557429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.557480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 
00:26:43.868 [2024-07-15 14:05:38.557663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.557713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.557909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.557960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.558150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.558201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.558374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.558424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.558588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.558640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.868 [2024-07-15 14:05:38.558834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.868 [2024-07-15 14:05:38.558907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.868 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.559137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.559209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.559391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.559441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.559650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.559701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.559939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.560009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 
00:26:43.869 [2024-07-15 14:05:38.560233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.560304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.560502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.560553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.560785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.560854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.561072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.561156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.561326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.561394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.561573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.561623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.561841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.561912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.562099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.562168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.562350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.562400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.562583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.562641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 
00:26:43.869 [2024-07-15 14:05:38.562792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.562844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.563029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.563097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.563276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.563327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.563504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.563555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.563735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.563798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.564009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.564059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.564243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.564294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.564454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.564504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.564685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.564735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.564981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.565032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 
00:26:43.869 [2024-07-15 14:05:38.565226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.565276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.565437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.565488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.565673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.565723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.565968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.566021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.566208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.566259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.566452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.566503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.566685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.566751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.566911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.566962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.567172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.567223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.567382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.567432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 
00:26:43.869 [2024-07-15 14:05:38.567611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.567662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.567885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.567936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.568121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.568171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.568371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.568422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.568584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.568634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.568823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.568898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.569083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.569153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.569383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.569434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.569619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.569669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.569876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.569944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 
00:26:43.869 [2024-07-15 14:05:38.570137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.570207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.570418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.570469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.570629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.570679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.570852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.570904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.571057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.571110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.571331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.571382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.571570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.571621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.571818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.571895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.572107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.572159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.572351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.572402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 
00:26:43.869 [2024-07-15 14:05:38.572618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.572675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.572932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.573004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.573218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.573287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.573443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.573493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.573654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.573715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.573928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.573999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.574197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.574269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.574449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.574499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.574659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.574710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.574913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.574964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 
00:26:43.869 [2024-07-15 14:05:38.575153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.575203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.575388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.575439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.575647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.575698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.575898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.575950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.576144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.576195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.576375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.576425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.576614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.576664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.576868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.576920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.577137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.869 [2024-07-15 14:05:38.577188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.869 qpair failed and we were unable to recover it. 00:26:43.869 [2024-07-15 14:05:38.577375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.870 [2024-07-15 14:05:38.577444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.870 qpair failed and we were unable to recover it. 
00:26:43.870 [2024-07-15 14:05:38.577631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.870 [2024-07-15 14:05:38.577682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420
00:26:43.870 qpair failed and we were unable to recover it.
... (the same pair of errors, connect() failed with errno = 111 from posix.c:1038:posix_sock_create followed by the sock connection error for tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 from nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock, repeats for every reconnect attempt from 14:05:38.577 through 14:05:38.629, and each attempt ends with "qpair failed and we were unable to recover it.")
00:26:43.872 [2024-07-15 14:05:38.629385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:43.872 [2024-07-15 14:05:38.629454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420
00:26:43.872 qpair failed and we were unable to recover it.
00:26:43.872 [2024-07-15 14:05:38.629612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 14:05:38.629663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.872 qpair failed and we were unable to recover it. 00:26:43.872 [2024-07-15 14:05:38.629848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 14:05:38.629918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.872 qpair failed and we were unable to recover it. 00:26:43.872 [2024-07-15 14:05:38.630138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 14:05:38.630189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.872 qpair failed and we were unable to recover it. 00:26:43.872 [2024-07-15 14:05:38.630410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 14:05:38.630479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.872 qpair failed and we were unable to recover it. 00:26:43.872 [2024-07-15 14:05:38.630661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 14:05:38.630711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.872 qpair failed and we were unable to recover it. 00:26:43.872 [2024-07-15 14:05:38.630949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 14:05:38.631018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.872 qpair failed and we were unable to recover it. 00:26:43.872 [2024-07-15 14:05:38.631216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 14:05:38.631285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.872 qpair failed and we were unable to recover it. 00:26:43.872 [2024-07-15 14:05:38.631493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 14:05:38.631543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.872 qpair failed and we were unable to recover it. 00:26:43.872 [2024-07-15 14:05:38.631765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 14:05:38.631818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.872 qpair failed and we were unable to recover it. 00:26:43.872 [2024-07-15 14:05:38.632031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 14:05:38.632101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.872 qpair failed and we were unable to recover it. 
00:26:43.872 [2024-07-15 14:05:38.632325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 14:05:38.632394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.872 qpair failed and we were unable to recover it. 00:26:43.872 [2024-07-15 14:05:38.632598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 14:05:38.632649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.872 qpair failed and we were unable to recover it. 00:26:43.872 [2024-07-15 14:05:38.632841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 14:05:38.632912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.872 qpair failed and we were unable to recover it. 00:26:43.872 [2024-07-15 14:05:38.633110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 14:05:38.633178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.872 qpair failed and we were unable to recover it. 00:26:43.872 [2024-07-15 14:05:38.633364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 14:05:38.633415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.872 qpair failed and we were unable to recover it. 00:26:43.872 [2024-07-15 14:05:38.633600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 14:05:38.633650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.872 qpair failed and we were unable to recover it. 00:26:43.872 [2024-07-15 14:05:38.633867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 14:05:38.633938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.872 qpair failed and we were unable to recover it. 00:26:43.872 [2024-07-15 14:05:38.634133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 14:05:38.634184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.872 qpair failed and we were unable to recover it. 00:26:43.872 [2024-07-15 14:05:38.634394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 14:05:38.634445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.872 qpair failed and we were unable to recover it. 00:26:43.872 [2024-07-15 14:05:38.634626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 14:05:38.634677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.872 qpair failed and we were unable to recover it. 
00:26:43.872 [2024-07-15 14:05:38.634915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 14:05:38.634967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.872 qpair failed and we were unable to recover it. 00:26:43.872 [2024-07-15 14:05:38.635192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 14:05:38.635261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.872 qpair failed and we were unable to recover it. 00:26:43.872 [2024-07-15 14:05:38.635481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 14:05:38.635531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.872 qpair failed and we were unable to recover it. 00:26:43.872 [2024-07-15 14:05:38.635691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 14:05:38.635758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.872 qpair failed and we were unable to recover it. 00:26:43.872 [2024-07-15 14:05:38.635955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 14:05:38.636035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.872 qpair failed and we were unable to recover it. 00:26:43.872 [2024-07-15 14:05:38.636267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 14:05:38.636335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.872 qpair failed and we were unable to recover it. 00:26:43.872 [2024-07-15 14:05:38.636519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 14:05:38.636570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.872 qpair failed and we were unable to recover it. 00:26:43.872 [2024-07-15 14:05:38.636727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.872 [2024-07-15 14:05:38.636797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.872 qpair failed and we were unable to recover it. 00:26:43.872 [2024-07-15 14:05:38.637021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.637090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.637261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.637331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 
00:26:43.873 [2024-07-15 14:05:38.637515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.637566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.637763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.637814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.637998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.638051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.638259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.638310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.638527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.638577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.638733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.638799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.639015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.639065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.639288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.639339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.639536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.639586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.639768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.639821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 
00:26:43.873 [2024-07-15 14:05:38.640017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.640099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.640289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.640358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.640548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.640598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.640759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.640811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.641034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.641104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.641333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.641400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.641581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.641632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.641819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.641892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.642076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.642146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.642326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.642377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 
00:26:43.873 [2024-07-15 14:05:38.642554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.642605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.642821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.642901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.643065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.643116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.643271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.643321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.643504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.643554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.643728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.643799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.643960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.644011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.644187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.644237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.644423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.644473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.644667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.644718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 
00:26:43.873 [2024-07-15 14:05:38.644922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.644974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.645152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.645202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.645389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.645439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.645660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.645711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.645908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.645959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.646142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.646193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.646401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.646451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.646668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.646719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.646961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.647029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.647252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.647321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 
00:26:43.873 [2024-07-15 14:05:38.647511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.647562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.647769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.647823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.648051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.648121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.648317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.648386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.648596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.648647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.648878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.648948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.649155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.649224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.649440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.649491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.649675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.649726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.649935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.650004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 
00:26:43.873 [2024-07-15 14:05:38.650217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.650285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.650497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.650548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.650812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.650865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.651064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.651132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.651358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.651426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.651586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.651636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.651830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.651901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.652092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.652160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.652317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.652390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.652579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.652630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 
00:26:43.873 [2024-07-15 14:05:38.652856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.652926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.653152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.653221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.653412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.653463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.653625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.653676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.653936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.654007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.654241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.654310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.654499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.654549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.654780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.654832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.655056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.655124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.655353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.655422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 
00:26:43.873 [2024-07-15 14:05:38.655607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.655658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.655867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.655937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.656111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.656178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.656397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.656448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.656667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.656717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.656961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.657030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.657261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.873 [2024-07-15 14:05:38.657331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.873 qpair failed and we were unable to recover it. 00:26:43.873 [2024-07-15 14:05:38.657512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.657563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.657760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.657813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.658029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.658079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 
00:26:43.874 [2024-07-15 14:05:38.658235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.658315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.658529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.658579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.658788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.658840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.659043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.659112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.659298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.659348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.659560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.659610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.659836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.659907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.660095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.660145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.660353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.660403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.660610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.660668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 
00:26:43.874 [2024-07-15 14:05:38.660902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.660971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.661193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.661262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.661447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.661498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.661679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.661729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.661945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.662015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.662237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.662305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.662516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.662567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.662766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.662819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.663050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.663119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.663303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.663373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 
00:26:43.874 [2024-07-15 14:05:38.663557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.663609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.663815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.663866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.664080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.664130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.664340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.664410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.664619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.664669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.664882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.664951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.665131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.665182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.665353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.665403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.665597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.665648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.665840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.665892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 
00:26:43.874 [2024-07-15 14:05:38.666076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.666127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.666303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.666353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.666499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.666549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.666767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.666818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.667014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.667065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.667245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.667296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.667493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.667551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.667766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.667819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.668005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.668056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.668236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.668286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 
00:26:43.874 [2024-07-15 14:05:38.668477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.668528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.668750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.668802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.669029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.669106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.669333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.669401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.669612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.669662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.669867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.669937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.670133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.670201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.670386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.670453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.670643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.670693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.670892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.670961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 
00:26:43.874 [2024-07-15 14:05:38.671161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.671231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.671423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.671474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.671688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.671753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.671920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.671970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.672126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.672177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.672385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.672435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.672587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.672637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.672836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.672906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.673132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.673201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.673425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.673476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 
00:26:43.874 [2024-07-15 14:05:38.673660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.673710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.673891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.673965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.674189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.674257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.674471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.674522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.674716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.674797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.674991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.675060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.675231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.675300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.675516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.675567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.675726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.874 [2024-07-15 14:05:38.675789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.874 qpair failed and we were unable to recover it. 00:26:43.874 [2024-07-15 14:05:38.676018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 14:05:38.676088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 
00:26:43.875 [2024-07-15 14:05:38.676315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 14:05:38.676384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 14:05:38.676566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 14:05:38.676617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 14:05:38.676806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 14:05:38.676880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 14:05:38.677070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 14:05:38.677139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 14:05:38.677360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 14:05:38.677428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 14:05:38.677593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 14:05:38.677643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 14:05:38.677833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 14:05:38.677904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 14:05:38.678124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 14:05:38.678176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 14:05:38.678392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 14:05:38.678443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 14:05:38.678631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 14:05:38.678681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 
00:26:43.875 [2024-07-15 14:05:38.678880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 14:05:38.678950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 14:05:38.679158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 14:05:38.679209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 14:05:38.679400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 14:05:38.679450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 14:05:38.679637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 14:05:38.679687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 14:05:38.679885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 14:05:38.679954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 14:05:38.680151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 14:05:38.680220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 14:05:38.680429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 14:05:38.680479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 14:05:38.680636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 14:05:38.680686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 14:05:38.680863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 14:05:38.680915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 14:05:38.681124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 14:05:38.681175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 
00:26:43.875 [2024-07-15 14:05:38.681344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 14:05:38.681394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 14:05:38.681612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 14:05:38.681662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 14:05:38.681864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 14:05:38.681916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 14:05:38.682100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 14:05:38.682151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 14:05:38.682364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 14:05:38.682414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 14:05:38.682598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 14:05:38.682649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 14:05:38.682824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 14:05:38.682876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 14:05:38.683085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 14:05:38.683136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 14:05:38.683345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 14:05:38.683395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 14:05:38.683605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 14:05:38.683655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 
00:26:43.875 [2024-07-15 14:05:38.683894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 14:05:38.683964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 14:05:38.684188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 14:05:38.684256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 14:05:38.684444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 14:05:38.684495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 14:05:38.684686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 14:05:38.684751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 14:05:38.684946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 14:05:38.685021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 14:05:38.685224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 14:05:38.685293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 14:05:38.685509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 14:05:38.685560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 14:05:38.685768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 14:05:38.685821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 14:05:38.686052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 14:05:38.686122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 14:05:38.686355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 14:05:38.686423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 
00:26:43.875 [2024-07-15 14:05:38.686611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 14:05:38.686661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 14:05:38.686910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 14:05:38.686981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 14:05:38.687175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 14:05:38.687243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 14:05:38.687451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 14:05:38.687502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 14:05:38.687695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 14:05:38.687758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 14:05:38.687969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 14:05:38.688038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 14:05:38.688211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 14:05:38.688285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 14:05:38.688492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 14:05:38.688542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 14:05:38.688750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 14:05:38.688802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 14:05:38.688995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 14:05:38.689063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 
00:26:43.875 [2024-07-15 14:05:38.689222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 14:05:38.689272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 14:05:38.689475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 14:05:38.689525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 14:05:38.689751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 14:05:38.689804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 14:05:38.690016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 14:05:38.690066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:43.875 [2024-07-15 14:05:38.690253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.875 [2024-07-15 14:05:38.690304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:43.875 qpair failed and we were unable to recover it. 00:26:44.150 [2024-07-15 14:05:38.690505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.150 [2024-07-15 14:05:38.690574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.150 qpair failed and we were unable to recover it. 00:26:44.150 [2024-07-15 14:05:38.690726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.151 [2024-07-15 14:05:38.690792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.151 qpair failed and we were unable to recover it. 00:26:44.151 [2024-07-15 14:05:38.690948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.151 [2024-07-15 14:05:38.690999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.151 qpair failed and we were unable to recover it. 00:26:44.151 [2024-07-15 14:05:38.691219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.151 [2024-07-15 14:05:38.691269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.151 qpair failed and we were unable to recover it. 00:26:44.151 [2024-07-15 14:05:38.691443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.151 [2024-07-15 14:05:38.691494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.151 qpair failed and we were unable to recover it. 
00:26:44.151 [2024-07-15 14:05:38.691677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.151 [2024-07-15 14:05:38.691728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.151 qpair failed and we were unable to recover it. 00:26:44.151 [2024-07-15 14:05:38.691959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.151 [2024-07-15 14:05:38.692016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.151 qpair failed and we were unable to recover it. 00:26:44.151 [2024-07-15 14:05:38.692202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.151 [2024-07-15 14:05:38.692253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.151 qpair failed and we were unable to recover it. 00:26:44.151 [2024-07-15 14:05:38.692419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.151 [2024-07-15 14:05:38.692470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.151 qpair failed and we were unable to recover it. 00:26:44.151 [2024-07-15 14:05:38.692668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.151 [2024-07-15 14:05:38.692718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.151 qpair failed and we were unable to recover it. 00:26:44.151 [2024-07-15 14:05:38.692950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.151 [2024-07-15 14:05:38.693001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.151 qpair failed and we were unable to recover it. 00:26:44.151 [2024-07-15 14:05:38.693197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.151 [2024-07-15 14:05:38.693266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.151 qpair failed and we were unable to recover it. 00:26:44.151 [2024-07-15 14:05:38.693458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.151 [2024-07-15 14:05:38.693509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.151 qpair failed and we were unable to recover it. 00:26:44.151 [2024-07-15 14:05:38.693694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.151 [2024-07-15 14:05:38.693780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.151 qpair failed and we were unable to recover it. 00:26:44.151 [2024-07-15 14:05:38.693982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.151 [2024-07-15 14:05:38.694033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.151 qpair failed and we were unable to recover it. 
00:26:44.151 [2024-07-15 14:05:38.694246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.151 [2024-07-15 14:05:38.694296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.151 qpair failed and we were unable to recover it. 00:26:44.151 [2024-07-15 14:05:38.694490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.151 [2024-07-15 14:05:38.694540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.151 qpair failed and we were unable to recover it. 00:26:44.151 [2024-07-15 14:05:38.694716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.151 [2024-07-15 14:05:38.694798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.151 qpair failed and we were unable to recover it. 00:26:44.151 [2024-07-15 14:05:38.694964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.151 [2024-07-15 14:05:38.695036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.151 qpair failed and we were unable to recover it. 00:26:44.151 [2024-07-15 14:05:38.695222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.151 [2024-07-15 14:05:38.695273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.151 qpair failed and we were unable to recover it. 00:26:44.151 [2024-07-15 14:05:38.695458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.151 [2024-07-15 14:05:38.695508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.151 qpair failed and we were unable to recover it. 00:26:44.151 [2024-07-15 14:05:38.695716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.151 [2024-07-15 14:05:38.695782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.151 qpair failed and we were unable to recover it. 00:26:44.151 [2024-07-15 14:05:38.695967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.151 [2024-07-15 14:05:38.696018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.151 qpair failed and we were unable to recover it. 00:26:44.151 [2024-07-15 14:05:38.696189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.151 [2024-07-15 14:05:38.696240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.151 qpair failed and we were unable to recover it. 00:26:44.151 [2024-07-15 14:05:38.696434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.151 [2024-07-15 14:05:38.696484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.151 qpair failed and we were unable to recover it. 
00:26:44.151 [2024-07-15 14:05:38.696692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.151 [2024-07-15 14:05:38.696755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.151 qpair failed and we were unable to recover it. 00:26:44.151 [2024-07-15 14:05:38.696952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.151 [2024-07-15 14:05:38.697003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.151 qpair failed and we were unable to recover it. 00:26:44.151 [2024-07-15 14:05:38.697215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.151 [2024-07-15 14:05:38.697266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.151 qpair failed and we were unable to recover it. 00:26:44.151 [2024-07-15 14:05:38.697455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.151 [2024-07-15 14:05:38.697525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.151 qpair failed and we were unable to recover it. 00:26:44.151 [2024-07-15 14:05:38.697733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.151 [2024-07-15 14:05:38.697797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.151 qpair failed and we were unable to recover it. 00:26:44.151 [2024-07-15 14:05:38.697984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.151 [2024-07-15 14:05:38.698055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.151 qpair failed and we were unable to recover it. 00:26:44.151 [2024-07-15 14:05:38.698227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.151 [2024-07-15 14:05:38.698295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.151 qpair failed and we were unable to recover it. 00:26:44.151 [2024-07-15 14:05:38.698474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.151 [2024-07-15 14:05:38.698524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.151 qpair failed and we were unable to recover it. 00:26:44.151 [2024-07-15 14:05:38.698750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.151 [2024-07-15 14:05:38.698809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.151 qpair failed and we were unable to recover it. 00:26:44.151 [2024-07-15 14:05:38.698999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.151 [2024-07-15 14:05:38.699051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.151 qpair failed and we were unable to recover it. 
00:26:44.151 [2024-07-15 14:05:38.699229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.151 [2024-07-15 14:05:38.699279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.151 qpair failed and we were unable to recover it. 00:26:44.151 [2024-07-15 14:05:38.699496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.151 [2024-07-15 14:05:38.699547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.151 qpair failed and we were unable to recover it. 00:26:44.151 [2024-07-15 14:05:38.699764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.151 [2024-07-15 14:05:38.699816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.151 qpair failed and we were unable to recover it. 00:26:44.151 [2024-07-15 14:05:38.699974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.151 [2024-07-15 14:05:38.700025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.151 qpair failed and we were unable to recover it. 00:26:44.151 [2024-07-15 14:05:38.700234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.151 [2024-07-15 14:05:38.700286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.151 qpair failed and we were unable to recover it. 00:26:44.151 [2024-07-15 14:05:38.700467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.151 [2024-07-15 14:05:38.700517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.151 qpair failed and we were unable to recover it. 00:26:44.151 [2024-07-15 14:05:38.700683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.152 [2024-07-15 14:05:38.700734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.152 qpair failed and we were unable to recover it. 00:26:44.152 [2024-07-15 14:05:38.700965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.152 [2024-07-15 14:05:38.701016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.152 qpair failed and we were unable to recover it. 00:26:44.152 [2024-07-15 14:05:38.701213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.152 [2024-07-15 14:05:38.701264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.152 qpair failed and we were unable to recover it. 00:26:44.152 [2024-07-15 14:05:38.701470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.152 [2024-07-15 14:05:38.701520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.152 qpair failed and we were unable to recover it. 
00:26:44.152 [2024-07-15 14:05:38.701693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.152 [2024-07-15 14:05:38.701755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.152 qpair failed and we were unable to recover it. 00:26:44.152 [2024-07-15 14:05:38.701952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.152 [2024-07-15 14:05:38.702003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.152 qpair failed and we were unable to recover it. 00:26:44.152 [2024-07-15 14:05:38.702236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.152 [2024-07-15 14:05:38.702306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.152 qpair failed and we were unable to recover it. 00:26:44.152 [2024-07-15 14:05:38.702519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.152 [2024-07-15 14:05:38.702571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.152 qpair failed and we were unable to recover it. 00:26:44.152 [2024-07-15 14:05:38.702776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.152 [2024-07-15 14:05:38.702829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.152 qpair failed and we were unable to recover it. 00:26:44.152 [2024-07-15 14:05:38.703035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.152 [2024-07-15 14:05:38.703106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.152 qpair failed and we were unable to recover it. 00:26:44.152 [2024-07-15 14:05:38.703334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.152 [2024-07-15 14:05:38.703404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.152 qpair failed and we were unable to recover it. 00:26:44.152 [2024-07-15 14:05:38.703588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.152 [2024-07-15 14:05:38.703638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.152 qpair failed and we were unable to recover it. 00:26:44.152 [2024-07-15 14:05:38.703851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.152 [2024-07-15 14:05:38.703921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.152 qpair failed and we were unable to recover it. 00:26:44.152 [2024-07-15 14:05:38.704130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.152 [2024-07-15 14:05:38.704198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.152 qpair failed and we were unable to recover it. 
00:26:44.152 [2024-07-15 14:05:38.704384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.152 [2024-07-15 14:05:38.704454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.152 qpair failed and we were unable to recover it. 00:26:44.152 [2024-07-15 14:05:38.704635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.152 [2024-07-15 14:05:38.704686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.152 qpair failed and we were unable to recover it. 00:26:44.152 [2024-07-15 14:05:38.704872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.152 [2024-07-15 14:05:38.704942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.152 qpair failed and we were unable to recover it. 00:26:44.152 [2024-07-15 14:05:38.705167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.152 [2024-07-15 14:05:38.705235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.152 qpair failed and we were unable to recover it. 00:26:44.152 [2024-07-15 14:05:38.705419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.152 [2024-07-15 14:05:38.705470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.152 qpair failed and we were unable to recover it. 00:26:44.152 [2024-07-15 14:05:38.705618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.152 [2024-07-15 14:05:38.705668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.152 qpair failed and we were unable to recover it. 00:26:44.152 [2024-07-15 14:05:38.705902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.152 [2024-07-15 14:05:38.705954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.152 qpair failed and we were unable to recover it. 00:26:44.152 [2024-07-15 14:05:38.706102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.152 [2024-07-15 14:05:38.706153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.152 qpair failed and we were unable to recover it. 00:26:44.152 [2024-07-15 14:05:38.706333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.152 [2024-07-15 14:05:38.706383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.152 qpair failed and we were unable to recover it. 00:26:44.152 [2024-07-15 14:05:38.706566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.152 [2024-07-15 14:05:38.706617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.152 qpair failed and we were unable to recover it. 
00:26:44.152 [2024-07-15 14:05:38.706826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.152 [2024-07-15 14:05:38.706879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.152 qpair failed and we were unable to recover it. 00:26:44.152 [2024-07-15 14:05:38.707064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.152 [2024-07-15 14:05:38.707114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.152 qpair failed and we were unable to recover it. 00:26:44.152 [2024-07-15 14:05:38.707325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.152 [2024-07-15 14:05:38.707376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.152 qpair failed and we were unable to recover it. 00:26:44.152 [2024-07-15 14:05:38.707560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.152 [2024-07-15 14:05:38.707610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.152 qpair failed and we were unable to recover it. 00:26:44.152 [2024-07-15 14:05:38.707782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.152 [2024-07-15 14:05:38.707835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.152 qpair failed and we were unable to recover it. 00:26:44.152 [2024-07-15 14:05:38.708039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.152 [2024-07-15 14:05:38.708109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.152 qpair failed and we were unable to recover it. 00:26:44.152 [2024-07-15 14:05:38.708321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.152 [2024-07-15 14:05:38.708371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.152 qpair failed and we were unable to recover it. 00:26:44.152 [2024-07-15 14:05:38.708592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.152 [2024-07-15 14:05:38.708643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.152 qpair failed and we were unable to recover it. 00:26:44.152 [2024-07-15 14:05:38.708857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.152 [2024-07-15 14:05:38.708928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.152 qpair failed and we were unable to recover it. 00:26:44.152 [2024-07-15 14:05:38.709164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.152 [2024-07-15 14:05:38.709233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.152 qpair failed and we were unable to recover it. 
00:26:44.152 [2024-07-15 14:05:38.709446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.152 [2024-07-15 14:05:38.709497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.152 qpair failed and we were unable to recover it. 00:26:44.152 [2024-07-15 14:05:38.709707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.152 [2024-07-15 14:05:38.709771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.152 qpair failed and we were unable to recover it. 00:26:44.152 [2024-07-15 14:05:38.710010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.152 [2024-07-15 14:05:38.710080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.152 qpair failed and we were unable to recover it. 00:26:44.152 [2024-07-15 14:05:38.710300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.152 [2024-07-15 14:05:38.710369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.152 qpair failed and we were unable to recover it. 00:26:44.152 [2024-07-15 14:05:38.710554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.152 [2024-07-15 14:05:38.710604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.152 qpair failed and we were unable to recover it. 00:26:44.152 [2024-07-15 14:05:38.710823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.152 [2024-07-15 14:05:38.710897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.152 qpair failed and we were unable to recover it. 00:26:44.152 [2024-07-15 14:05:38.711116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.152 [2024-07-15 14:05:38.711184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.152 qpair failed and we were unable to recover it. 00:26:44.152 [2024-07-15 14:05:38.711361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.153 [2024-07-15 14:05:38.711429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.153 qpair failed and we were unable to recover it. 00:26:44.153 [2024-07-15 14:05:38.711641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.153 [2024-07-15 14:05:38.711691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.153 qpair failed and we were unable to recover it. 00:26:44.153 [2024-07-15 14:05:38.711932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.153 [2024-07-15 14:05:38.712002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.153 qpair failed and we were unable to recover it. 
00:26:44.153 [2024-07-15 14:05:38.712161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.153 [2024-07-15 14:05:38.712234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.153 qpair failed and we were unable to recover it. 00:26:44.153 [2024-07-15 14:05:38.712445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.153 [2024-07-15 14:05:38.712496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.153 qpair failed and we were unable to recover it. 00:26:44.153 [2024-07-15 14:05:38.712716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.153 [2024-07-15 14:05:38.712782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.153 qpair failed and we were unable to recover it. 00:26:44.153 [2024-07-15 14:05:38.713015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.153 [2024-07-15 14:05:38.713087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.153 qpair failed and we were unable to recover it. 00:26:44.153 [2024-07-15 14:05:38.713322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.153 [2024-07-15 14:05:38.713390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.153 qpair failed and we were unable to recover it. 00:26:44.153 [2024-07-15 14:05:38.713599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.153 [2024-07-15 14:05:38.713650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.153 qpair failed and we were unable to recover it. 00:26:44.153 [2024-07-15 14:05:38.713846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.153 [2024-07-15 14:05:38.713917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.153 qpair failed and we were unable to recover it. 00:26:44.153 [2024-07-15 14:05:38.714142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.153 [2024-07-15 14:05:38.714211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.153 qpair failed and we were unable to recover it. 00:26:44.153 [2024-07-15 14:05:38.714433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.153 [2024-07-15 14:05:38.714502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.153 qpair failed and we were unable to recover it. 00:26:44.153 [2024-07-15 14:05:38.714715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.153 [2024-07-15 14:05:38.714782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.153 qpair failed and we were unable to recover it. 
00:26:44.153 [2024-07-15 14:05:38.715004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.153 [2024-07-15 14:05:38.715081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.153 qpair failed and we were unable to recover it. 00:26:44.153 [2024-07-15 14:05:38.715282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.153 [2024-07-15 14:05:38.715351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.153 qpair failed and we were unable to recover it. 00:26:44.153 [2024-07-15 14:05:38.715559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.153 [2024-07-15 14:05:38.715610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.153 qpair failed and we were unable to recover it. 00:26:44.153 [2024-07-15 14:05:38.715834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.153 [2024-07-15 14:05:38.715906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.153 qpair failed and we were unable to recover it. 00:26:44.153 [2024-07-15 14:05:38.716098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.153 [2024-07-15 14:05:38.716167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.153 qpair failed and we were unable to recover it. 00:26:44.153 [2024-07-15 14:05:38.716395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.153 [2024-07-15 14:05:38.716464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.153 qpair failed and we were unable to recover it. 00:26:44.153 [2024-07-15 14:05:38.716644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.153 [2024-07-15 14:05:38.716702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.153 qpair failed and we were unable to recover it. 00:26:44.153 [2024-07-15 14:05:38.716919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.153 [2024-07-15 14:05:38.716987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.153 qpair failed and we were unable to recover it. 00:26:44.153 [2024-07-15 14:05:38.717139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.153 [2024-07-15 14:05:38.717189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.153 qpair failed and we were unable to recover it. 00:26:44.153 [2024-07-15 14:05:38.717370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.153 [2024-07-15 14:05:38.717421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.153 qpair failed and we were unable to recover it. 
00:26:44.153 [2024-07-15 14:05:38.717629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.153 [2024-07-15 14:05:38.717681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.153 qpair failed and we were unable to recover it. 00:26:44.153 [2024-07-15 14:05:38.717849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.153 [2024-07-15 14:05:38.717900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.153 qpair failed and we were unable to recover it. 00:26:44.153 [2024-07-15 14:05:38.718089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.153 [2024-07-15 14:05:38.718140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.153 qpair failed and we were unable to recover it. 00:26:44.153 [2024-07-15 14:05:38.718321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.153 [2024-07-15 14:05:38.718372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.153 qpair failed and we were unable to recover it. 00:26:44.153 [2024-07-15 14:05:38.718590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.153 [2024-07-15 14:05:38.718641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.153 qpair failed and we were unable to recover it. 00:26:44.153 [2024-07-15 14:05:38.718853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.153 [2024-07-15 14:05:38.718929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.153 qpair failed and we were unable to recover it. 00:26:44.153 [2024-07-15 14:05:38.719117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.153 [2024-07-15 14:05:38.719187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.153 qpair failed and we were unable to recover it. 00:26:44.153 [2024-07-15 14:05:38.719370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.153 [2024-07-15 14:05:38.719420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.153 qpair failed and we were unable to recover it. 00:26:44.153 [2024-07-15 14:05:38.719600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.153 [2024-07-15 14:05:38.719650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.153 qpair failed and we were unable to recover it. 00:26:44.153 [2024-07-15 14:05:38.719846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.153 [2024-07-15 14:05:38.719917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.153 qpair failed and we were unable to recover it. 
00:26:44.153 [2024-07-15 14:05:38.720128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.153 [2024-07-15 14:05:38.720198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.153 qpair failed and we were unable to recover it. 00:26:44.153 [2024-07-15 14:05:38.720359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.153 [2024-07-15 14:05:38.720410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.153 qpair failed and we were unable to recover it. 00:26:44.153 [2024-07-15 14:05:38.720588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.153 [2024-07-15 14:05:38.720639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.153 qpair failed and we were unable to recover it. 00:26:44.153 [2024-07-15 14:05:38.720829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.153 [2024-07-15 14:05:38.720904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.153 qpair failed and we were unable to recover it. 00:26:44.153 [2024-07-15 14:05:38.721112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.153 [2024-07-15 14:05:38.721181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.153 qpair failed and we were unable to recover it. 00:26:44.153 [2024-07-15 14:05:38.721403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.153 [2024-07-15 14:05:38.721454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.153 qpair failed and we were unable to recover it. 00:26:44.153 [2024-07-15 14:05:38.721638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.153 [2024-07-15 14:05:38.721689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.153 qpair failed and we were unable to recover it. 00:26:44.153 [2024-07-15 14:05:38.721925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.154 [2024-07-15 14:05:38.721995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.154 qpair failed and we were unable to recover it. 00:26:44.154 [2024-07-15 14:05:38.722184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.154 [2024-07-15 14:05:38.722252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.154 qpair failed and we were unable to recover it. 00:26:44.154 [2024-07-15 14:05:38.722437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.154 [2024-07-15 14:05:38.722487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.154 qpair failed and we were unable to recover it. 
00:26:44.154 [2024-07-15 14:05:38.722696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.154 [2024-07-15 14:05:38.722756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.154 qpair failed and we were unable to recover it. 00:26:44.154 [2024-07-15 14:05:38.722991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.154 [2024-07-15 14:05:38.723060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.154 qpair failed and we were unable to recover it. 00:26:44.154 [2024-07-15 14:05:38.723284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.154 [2024-07-15 14:05:38.723353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.154 qpair failed and we were unable to recover it. 00:26:44.154 [2024-07-15 14:05:38.723564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.154 [2024-07-15 14:05:38.723622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.154 qpair failed and we were unable to recover it. 00:26:44.154 [2024-07-15 14:05:38.723829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.154 [2024-07-15 14:05:38.723900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.154 qpair failed and we were unable to recover it. 00:26:44.154 [2024-07-15 14:05:38.724089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.154 [2024-07-15 14:05:38.724159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.154 qpair failed and we were unable to recover it. 00:26:44.154 [2024-07-15 14:05:38.724384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.154 [2024-07-15 14:05:38.724453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.154 qpair failed and we were unable to recover it. 00:26:44.154 [2024-07-15 14:05:38.724639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.154 [2024-07-15 14:05:38.724690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.154 qpair failed and we were unable to recover it. 00:26:44.154 [2024-07-15 14:05:38.724893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.154 [2024-07-15 14:05:38.724962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.154 qpair failed and we were unable to recover it. 00:26:44.154 [2024-07-15 14:05:38.725187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.154 [2024-07-15 14:05:38.725255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.154 qpair failed and we were unable to recover it. 
00:26:44.154 [2024-07-15 14:05:38.725473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.154 [2024-07-15 14:05:38.725525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.154 qpair failed and we were unable to recover it. 00:26:44.154 [2024-07-15 14:05:38.725720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.154 [2024-07-15 14:05:38.725787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.154 qpair failed and we were unable to recover it. 00:26:44.154 [2024-07-15 14:05:38.725986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.154 [2024-07-15 14:05:38.726053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.154 qpair failed and we were unable to recover it. 00:26:44.154 [2024-07-15 14:05:38.726251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.154 [2024-07-15 14:05:38.726321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.154 qpair failed and we were unable to recover it. 00:26:44.154 [2024-07-15 14:05:38.726503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.154 [2024-07-15 14:05:38.726554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.154 qpair failed and we were unable to recover it. 00:26:44.154 [2024-07-15 14:05:38.726714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.154 [2024-07-15 14:05:38.726798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.154 qpair failed and we were unable to recover it. 00:26:44.154 [2024-07-15 14:05:38.726995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.154 [2024-07-15 14:05:38.727046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.154 qpair failed and we were unable to recover it. 00:26:44.154 [2024-07-15 14:05:38.727240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.154 [2024-07-15 14:05:38.727308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.154 qpair failed and we were unable to recover it. 00:26:44.154 [2024-07-15 14:05:38.727495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.154 [2024-07-15 14:05:38.727546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.154 qpair failed and we were unable to recover it. 00:26:44.154 [2024-07-15 14:05:38.727726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.154 [2024-07-15 14:05:38.727795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.154 qpair failed and we were unable to recover it. 
00:26:44.154 [2024-07-15 14:05:38.727978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.154 [2024-07-15 14:05:38.728029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.154 qpair failed and we were unable to recover it. 00:26:44.154 [2024-07-15 14:05:38.728214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.154 [2024-07-15 14:05:38.728264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.154 qpair failed and we were unable to recover it. 00:26:44.154 [2024-07-15 14:05:38.728446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.154 [2024-07-15 14:05:38.728497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.154 qpair failed and we were unable to recover it. 00:26:44.154 [2024-07-15 14:05:38.728683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.154 [2024-07-15 14:05:38.728734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.154 qpair failed and we were unable to recover it. 00:26:44.154 [2024-07-15 14:05:38.728963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.154 [2024-07-15 14:05:38.729014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.154 qpair failed and we were unable to recover it. 00:26:44.154 [2024-07-15 14:05:38.729211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.154 [2024-07-15 14:05:38.729262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.154 qpair failed and we were unable to recover it. 00:26:44.154 [2024-07-15 14:05:38.729476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.154 [2024-07-15 14:05:38.729526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.154 qpair failed and we were unable to recover it. 00:26:44.154 [2024-07-15 14:05:38.729753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.154 [2024-07-15 14:05:38.729806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.154 qpair failed and we were unable to recover it. 00:26:44.154 [2024-07-15 14:05:38.730003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.154 [2024-07-15 14:05:38.730072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.154 qpair failed and we were unable to recover it. 00:26:44.154 [2024-07-15 14:05:38.730258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.154 [2024-07-15 14:05:38.730327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.154 qpair failed and we were unable to recover it. 
00:26:44.154 [2024-07-15 14:05:38.730546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.154 [2024-07-15 14:05:38.730597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.154 qpair failed and we were unable to recover it. 00:26:44.154 [2024-07-15 14:05:38.730809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.154 [2024-07-15 14:05:38.730884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.154 qpair failed and we were unable to recover it. 00:26:44.154 [2024-07-15 14:05:38.731079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.154 [2024-07-15 14:05:38.731146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.154 qpair failed and we were unable to recover it. 00:26:44.154 [2024-07-15 14:05:38.731365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.154 [2024-07-15 14:05:38.731432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.154 qpair failed and we were unable to recover it. 00:26:44.154 [2024-07-15 14:05:38.731595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.154 [2024-07-15 14:05:38.731645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.154 qpair failed and we were unable to recover it. 00:26:44.154 [2024-07-15 14:05:38.731812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.154 [2024-07-15 14:05:38.731864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.154 qpair failed and we were unable to recover it. 00:26:44.154 [2024-07-15 14:05:38.732017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.154 [2024-07-15 14:05:38.732067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.154 qpair failed and we were unable to recover it. 00:26:44.154 [2024-07-15 14:05:38.732274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.154 [2024-07-15 14:05:38.732325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.154 qpair failed and we were unable to recover it. 00:26:44.155 [2024-07-15 14:05:38.732513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.155 [2024-07-15 14:05:38.732564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.155 qpair failed and we were unable to recover it. 00:26:44.155 [2024-07-15 14:05:38.732762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.155 [2024-07-15 14:05:38.732814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.155 qpair failed and we were unable to recover it. 
00:26:44.155 [2024-07-15 14:05:38.732963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.155 [2024-07-15 14:05:38.733014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.155 qpair failed and we were unable to recover it. 00:26:44.155 [2024-07-15 14:05:38.733194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.155 [2024-07-15 14:05:38.733245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.155 qpair failed and we were unable to recover it. 00:26:44.155 [2024-07-15 14:05:38.733453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.155 [2024-07-15 14:05:38.733503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.155 qpair failed and we were unable to recover it. 00:26:44.155 [2024-07-15 14:05:38.733688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.155 [2024-07-15 14:05:38.733750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.155 qpair failed and we were unable to recover it. 00:26:44.155 [2024-07-15 14:05:38.733973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.155 [2024-07-15 14:05:38.734024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.155 qpair failed and we were unable to recover it. 00:26:44.155 [2024-07-15 14:05:38.734173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.155 [2024-07-15 14:05:38.734224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.155 qpair failed and we were unable to recover it. 00:26:44.155 [2024-07-15 14:05:38.734434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.155 [2024-07-15 14:05:38.734485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.155 qpair failed and we were unable to recover it. 00:26:44.155 [2024-07-15 14:05:38.734674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.155 [2024-07-15 14:05:38.734725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.155 qpair failed and we were unable to recover it. 00:26:44.155 [2024-07-15 14:05:38.734943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.155 [2024-07-15 14:05:38.735012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.155 qpair failed and we were unable to recover it. 00:26:44.155 [2024-07-15 14:05:38.735221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.155 [2024-07-15 14:05:38.735272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.155 qpair failed and we were unable to recover it. 
00:26:44.155 [2024-07-15 14:05:38.735459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.155 [2024-07-15 14:05:38.735510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.155 qpair failed and we were unable to recover it. 00:26:44.155 [2024-07-15 14:05:38.735669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.155 [2024-07-15 14:05:38.735719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.155 qpair failed and we were unable to recover it. 00:26:44.155 [2024-07-15 14:05:38.735975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.155 [2024-07-15 14:05:38.736043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.155 qpair failed and we were unable to recover it. 00:26:44.155 [2024-07-15 14:05:38.736261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.155 [2024-07-15 14:05:38.736330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.155 qpair failed and we were unable to recover it. 00:26:44.155 [2024-07-15 14:05:38.736487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.155 [2024-07-15 14:05:38.736538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.155 qpair failed and we were unable to recover it. 00:26:44.155 [2024-07-15 14:05:38.736717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.155 [2024-07-15 14:05:38.736783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.155 qpair failed and we were unable to recover it. 00:26:44.155 [2024-07-15 14:05:38.736969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.155 [2024-07-15 14:05:38.737020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.155 qpair failed and we were unable to recover it. 00:26:44.155 [2024-07-15 14:05:38.737201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.155 [2024-07-15 14:05:38.737253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.155 qpair failed and we were unable to recover it. 00:26:44.155 [2024-07-15 14:05:38.737445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.155 [2024-07-15 14:05:38.737496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.155 qpair failed and we were unable to recover it. 00:26:44.155 [2024-07-15 14:05:38.737661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.155 [2024-07-15 14:05:38.737712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.155 qpair failed and we were unable to recover it. 
00:26:44.155 [2024-07-15 14:05:38.737942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.155 [2024-07-15 14:05:38.737994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.155 qpair failed and we were unable to recover it. 00:26:44.155 [2024-07-15 14:05:38.738181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.155 [2024-07-15 14:05:38.738232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.155 qpair failed and we were unable to recover it. 00:26:44.155 [2024-07-15 14:05:38.738395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.155 [2024-07-15 14:05:38.738446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.155 qpair failed and we were unable to recover it. 00:26:44.155 [2024-07-15 14:05:38.738654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.155 [2024-07-15 14:05:38.738705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.155 qpair failed and we were unable to recover it. 00:26:44.155 [2024-07-15 14:05:38.738937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.155 [2024-07-15 14:05:38.738989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.155 qpair failed and we were unable to recover it. 00:26:44.155 [2024-07-15 14:05:38.739198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.155 [2024-07-15 14:05:38.739249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.155 qpair failed and we were unable to recover it. 00:26:44.155 [2024-07-15 14:05:38.739473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.155 [2024-07-15 14:05:38.739524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.155 qpair failed and we were unable to recover it. 00:26:44.155 [2024-07-15 14:05:38.739704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.155 [2024-07-15 14:05:38.739770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.155 qpair failed and we were unable to recover it. 00:26:44.155 [2024-07-15 14:05:38.739956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.155 [2024-07-15 14:05:38.740026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.155 qpair failed and we were unable to recover it. 00:26:44.155 [2024-07-15 14:05:38.740208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.155 [2024-07-15 14:05:38.740258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.155 qpair failed and we were unable to recover it. 
00:26:44.155 [2024-07-15 14:05:38.740444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.155 [2024-07-15 14:05:38.740495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.155 qpair failed and we were unable to recover it. 00:26:44.155 [2024-07-15 14:05:38.740673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.155 [2024-07-15 14:05:38.740751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.155 qpair failed and we were unable to recover it. 00:26:44.155 [2024-07-15 14:05:38.740937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.156 [2024-07-15 14:05:38.741005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.156 qpair failed and we were unable to recover it. 00:26:44.156 [2024-07-15 14:05:38.741187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.156 [2024-07-15 14:05:38.741238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.156 qpair failed and we were unable to recover it. 00:26:44.156 [2024-07-15 14:05:38.741427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.156 [2024-07-15 14:05:38.741478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.156 qpair failed and we were unable to recover it. 00:26:44.156 [2024-07-15 14:05:38.741692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.156 [2024-07-15 14:05:38.741757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.156 qpair failed and we were unable to recover it. 00:26:44.156 [2024-07-15 14:05:38.741975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.156 [2024-07-15 14:05:38.742026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.156 qpair failed and we were unable to recover it. 00:26:44.156 [2024-07-15 14:05:38.742237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.156 [2024-07-15 14:05:38.742288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.156 qpair failed and we were unable to recover it. 00:26:44.156 [2024-07-15 14:05:38.742488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.156 [2024-07-15 14:05:38.742556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.156 qpair failed and we were unable to recover it. 00:26:44.156 [2024-07-15 14:05:38.742787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.156 [2024-07-15 14:05:38.742840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.156 qpair failed and we were unable to recover it. 
00:26:44.156 [2024-07-15 14:05:38.743039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.156 [2024-07-15 14:05:38.743114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.156 qpair failed and we were unable to recover it. 00:26:44.156 [2024-07-15 14:05:38.743282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.156 [2024-07-15 14:05:38.743353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.156 qpair failed and we were unable to recover it. 00:26:44.156 [2024-07-15 14:05:38.743539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.156 [2024-07-15 14:05:38.743591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.156 qpair failed and we were unable to recover it. 00:26:44.156 [2024-07-15 14:05:38.743762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.156 [2024-07-15 14:05:38.743814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.156 qpair failed and we were unable to recover it. 00:26:44.156 [2024-07-15 14:05:38.744036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.156 [2024-07-15 14:05:38.744111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.156 qpair failed and we were unable to recover it. 00:26:44.156 [2024-07-15 14:05:38.744346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.156 [2024-07-15 14:05:38.744416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.156 qpair failed and we were unable to recover it. 00:26:44.156 [2024-07-15 14:05:38.744604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.156 [2024-07-15 14:05:38.744656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.156 qpair failed and we were unable to recover it. 00:26:44.156 [2024-07-15 14:05:38.744855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.156 [2024-07-15 14:05:38.744924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.156 qpair failed and we were unable to recover it. 00:26:44.156 [2024-07-15 14:05:38.745109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.156 [2024-07-15 14:05:38.745178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.156 qpair failed and we were unable to recover it. 00:26:44.156 [2024-07-15 14:05:38.745357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.156 [2024-07-15 14:05:38.745409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.156 qpair failed and we were unable to recover it. 
00:26:44.156 [2024-07-15 14:05:38.745621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.156 [2024-07-15 14:05:38.745672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.156 qpair failed and we were unable to recover it. 00:26:44.156 [2024-07-15 14:05:38.745879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.156 [2024-07-15 14:05:38.745950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.156 qpair failed and we were unable to recover it. 00:26:44.156 [2024-07-15 14:05:38.746152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.156 [2024-07-15 14:05:38.746203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.156 qpair failed and we were unable to recover it. 00:26:44.156 [2024-07-15 14:05:38.746389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.156 [2024-07-15 14:05:38.746440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.156 qpair failed and we were unable to recover it. 00:26:44.156 [2024-07-15 14:05:38.746656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.156 [2024-07-15 14:05:38.746707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.156 qpair failed and we were unable to recover it. 00:26:44.156 [2024-07-15 14:05:38.746940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.156 [2024-07-15 14:05:38.746991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.156 qpair failed and we were unable to recover it. 00:26:44.156 [2024-07-15 14:05:38.747189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.156 [2024-07-15 14:05:38.747259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.156 qpair failed and we were unable to recover it. 00:26:44.156 [2024-07-15 14:05:38.747445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.156 [2024-07-15 14:05:38.747496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.156 qpair failed and we were unable to recover it. 00:26:44.156 [2024-07-15 14:05:38.747650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.156 [2024-07-15 14:05:38.747709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.156 qpair failed and we were unable to recover it. 00:26:44.156 [2024-07-15 14:05:38.747958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.156 [2024-07-15 14:05:38.748010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.156 qpair failed and we were unable to recover it. 
00:26:44.156 [2024-07-15 14:05:38.748198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.156 [2024-07-15 14:05:38.748267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.156 qpair failed and we were unable to recover it. 00:26:44.156 [2024-07-15 14:05:38.748478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.156 [2024-07-15 14:05:38.748529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.156 qpair failed and we were unable to recover it. 00:26:44.156 [2024-07-15 14:05:38.748751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.156 [2024-07-15 14:05:38.748804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.156 qpair failed and we were unable to recover it. 00:26:44.156 [2024-07-15 14:05:38.749016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.156 [2024-07-15 14:05:38.749067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.156 qpair failed and we were unable to recover it. 00:26:44.156 [2024-07-15 14:05:38.749299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.156 [2024-07-15 14:05:38.749368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.156 qpair failed and we were unable to recover it. 00:26:44.156 [2024-07-15 14:05:38.749519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.156 [2024-07-15 14:05:38.749570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.156 qpair failed and we were unable to recover it. 00:26:44.156 [2024-07-15 14:05:38.749766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.156 [2024-07-15 14:05:38.749818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.156 qpair failed and we were unable to recover it. 00:26:44.156 [2024-07-15 14:05:38.749978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.156 [2024-07-15 14:05:38.750059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.156 qpair failed and we were unable to recover it. 00:26:44.156 [2024-07-15 14:05:38.750279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.156 [2024-07-15 14:05:38.750313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.156 qpair failed and we were unable to recover it. 00:26:44.156 [2024-07-15 14:05:38.750479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.156 [2024-07-15 14:05:38.750530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.156 qpair failed and we were unable to recover it. 
00:26:44.156 [2024-07-15 14:05:38.750714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.156 [2024-07-15 14:05:38.750776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.156 qpair failed and we were unable to recover it. 00:26:44.156 [2024-07-15 14:05:38.750931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.156 [2024-07-15 14:05:38.750982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.156 qpair failed and we were unable to recover it. 00:26:44.156 [2024-07-15 14:05:38.751176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.156 [2024-07-15 14:05:38.751228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.156 qpair failed and we were unable to recover it. 00:26:44.156 [2024-07-15 14:05:38.751448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.157 [2024-07-15 14:05:38.751499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.157 qpair failed and we were unable to recover it. 00:26:44.157 [2024-07-15 14:05:38.751687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.157 [2024-07-15 14:05:38.751748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.157 qpair failed and we were unable to recover it. 00:26:44.157 [2024-07-15 14:05:38.751960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.157 [2024-07-15 14:05:38.751994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.157 qpair failed and we were unable to recover it. 00:26:44.157 [2024-07-15 14:05:38.752137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.157 [2024-07-15 14:05:38.752171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.157 qpair failed and we were unable to recover it. 00:26:44.157 [2024-07-15 14:05:38.752350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.157 [2024-07-15 14:05:38.752383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.157 qpair failed and we were unable to recover it. 00:26:44.157 [2024-07-15 14:05:38.752509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.157 [2024-07-15 14:05:38.752542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.157 qpair failed and we were unable to recover it. 00:26:44.157 [2024-07-15 14:05:38.752691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.157 [2024-07-15 14:05:38.752724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.157 qpair failed and we were unable to recover it. 
00:26:44.161 [2024-07-15 14:05:38.785273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.161 [2024-07-15 14:05:38.785298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.161 qpair failed and we were unable to recover it. 00:26:44.161 [2024-07-15 14:05:38.785427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.161 [2024-07-15 14:05:38.785451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.161 qpair failed and we were unable to recover it. 00:26:44.162 [2024-07-15 14:05:38.785596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.162 [2024-07-15 14:05:38.785621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.162 qpair failed and we were unable to recover it. 00:26:44.162 [2024-07-15 14:05:38.785750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.162 [2024-07-15 14:05:38.785776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.162 qpair failed and we were unable to recover it. 00:26:44.162 [2024-07-15 14:05:38.786509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.162 [2024-07-15 14:05:38.786540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.162 qpair failed and we were unable to recover it. 00:26:44.162 [2024-07-15 14:05:38.786794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.162 [2024-07-15 14:05:38.786826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.162 qpair failed and we were unable to recover it. 00:26:44.162 [2024-07-15 14:05:38.786937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.162 [2024-07-15 14:05:38.786963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.162 qpair failed and we were unable to recover it. 00:26:44.162 [2024-07-15 14:05:38.787123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.162 [2024-07-15 14:05:38.787149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.162 qpair failed and we were unable to recover it. 00:26:44.162 [2024-07-15 14:05:38.787283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.162 [2024-07-15 14:05:38.787309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.162 qpair failed and we were unable to recover it. 00:26:44.162 [2024-07-15 14:05:38.787446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.162 [2024-07-15 14:05:38.787472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.162 qpair failed and we were unable to recover it. 
00:26:44.162 [2024-07-15 14:05:38.787586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.162 [2024-07-15 14:05:38.787612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.162 qpair failed and we were unable to recover it. 00:26:44.162 [2024-07-15 14:05:38.787772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.162 [2024-07-15 14:05:38.787800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.162 qpair failed and we were unable to recover it. 00:26:44.162 [2024-07-15 14:05:38.787906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.162 [2024-07-15 14:05:38.787930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.162 qpair failed and we were unable to recover it. 00:26:44.162 [2024-07-15 14:05:38.788063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.162 [2024-07-15 14:05:38.788107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.162 qpair failed and we were unable to recover it. 00:26:44.162 [2024-07-15 14:05:38.788309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.162 [2024-07-15 14:05:38.788334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.162 qpair failed and we were unable to recover it. 00:26:44.162 [2024-07-15 14:05:38.788494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.162 [2024-07-15 14:05:38.788518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.162 qpair failed and we were unable to recover it. 00:26:44.162 [2024-07-15 14:05:38.788646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.162 [2024-07-15 14:05:38.788685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.162 qpair failed and we were unable to recover it. 00:26:44.162 [2024-07-15 14:05:38.788823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.162 [2024-07-15 14:05:38.788850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.162 qpair failed and we were unable to recover it. 00:26:44.162 [2024-07-15 14:05:38.788975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.162 [2024-07-15 14:05:38.789000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.162 qpair failed and we were unable to recover it. 00:26:44.162 [2024-07-15 14:05:38.789229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.162 [2024-07-15 14:05:38.789256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.162 qpair failed and we were unable to recover it. 
00:26:44.162 [2024-07-15 14:05:38.789384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.162 [2024-07-15 14:05:38.789410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.162 qpair failed and we were unable to recover it. 00:26:44.162 [2024-07-15 14:05:38.789562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.162 [2024-07-15 14:05:38.789588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.162 qpair failed and we were unable to recover it. 00:26:44.162 [2024-07-15 14:05:38.789696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.162 [2024-07-15 14:05:38.789721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.162 qpair failed and we were unable to recover it. 00:26:44.162 [2024-07-15 14:05:38.789857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.162 [2024-07-15 14:05:38.789883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.162 qpair failed and we were unable to recover it. 00:26:44.162 [2024-07-15 14:05:38.789990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.162 [2024-07-15 14:05:38.790016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.162 qpair failed and we were unable to recover it. 00:26:44.162 [2024-07-15 14:05:38.790152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.162 [2024-07-15 14:05:38.790193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.162 qpair failed and we were unable to recover it. 00:26:44.162 [2024-07-15 14:05:38.790309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.162 [2024-07-15 14:05:38.790349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.162 qpair failed and we were unable to recover it. 00:26:44.162 [2024-07-15 14:05:38.790454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.162 [2024-07-15 14:05:38.790480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.162 qpair failed and we were unable to recover it. 00:26:44.162 [2024-07-15 14:05:38.790599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.162 [2024-07-15 14:05:38.790626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.162 qpair failed and we were unable to recover it. 00:26:44.162 [2024-07-15 14:05:38.790725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.162 [2024-07-15 14:05:38.790758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.162 qpair failed and we were unable to recover it. 
00:26:44.162 [2024-07-15 14:05:38.790853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.162 [2024-07-15 14:05:38.790879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.162 qpair failed and we were unable to recover it. 00:26:44.162 [2024-07-15 14:05:38.790989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.162 [2024-07-15 14:05:38.791014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.162 qpair failed and we were unable to recover it. 00:26:44.162 [2024-07-15 14:05:38.791107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.162 [2024-07-15 14:05:38.791143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.162 qpair failed and we were unable to recover it. 00:26:44.162 [2024-07-15 14:05:38.791283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.162 [2024-07-15 14:05:38.791309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.162 qpair failed and we were unable to recover it. 00:26:44.162 [2024-07-15 14:05:38.791416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.162 [2024-07-15 14:05:38.791440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.162 qpair failed and we were unable to recover it. 00:26:44.162 [2024-07-15 14:05:38.791559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.162 [2024-07-15 14:05:38.791583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.162 qpair failed and we were unable to recover it. 00:26:44.162 [2024-07-15 14:05:38.791713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.162 [2024-07-15 14:05:38.791759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.162 qpair failed and we were unable to recover it. 00:26:44.162 [2024-07-15 14:05:38.791868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.162 [2024-07-15 14:05:38.791893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.162 qpair failed and we were unable to recover it. 00:26:44.162 [2024-07-15 14:05:38.791986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.162 [2024-07-15 14:05:38.792010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.162 qpair failed and we were unable to recover it. 00:26:44.162 [2024-07-15 14:05:38.792242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.162 [2024-07-15 14:05:38.792267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.162 qpair failed and we were unable to recover it. 
00:26:44.162 [2024-07-15 14:05:38.792427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.162 [2024-07-15 14:05:38.792452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.162 qpair failed and we were unable to recover it. 00:26:44.162 [2024-07-15 14:05:38.792543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.162 [2024-07-15 14:05:38.792568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.162 qpair failed and we were unable to recover it. 00:26:44.162 [2024-07-15 14:05:38.792688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.162 [2024-07-15 14:05:38.792716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.163 qpair failed and we were unable to recover it. 00:26:44.163 [2024-07-15 14:05:38.792851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.163 [2024-07-15 14:05:38.792878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.163 qpair failed and we were unable to recover it. 00:26:44.163 [2024-07-15 14:05:38.792974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.163 [2024-07-15 14:05:38.792998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.163 qpair failed and we were unable to recover it. 00:26:44.163 [2024-07-15 14:05:38.793135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.163 [2024-07-15 14:05:38.793161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.163 qpair failed and we were unable to recover it. 00:26:44.163 [2024-07-15 14:05:38.793312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.163 [2024-07-15 14:05:38.793352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.163 qpair failed and we were unable to recover it. 00:26:44.163 [2024-07-15 14:05:38.793492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.163 [2024-07-15 14:05:38.793517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.163 qpair failed and we were unable to recover it. 00:26:44.163 [2024-07-15 14:05:38.793669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.163 [2024-07-15 14:05:38.793714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.163 qpair failed and we were unable to recover it. 00:26:44.163 [2024-07-15 14:05:38.793896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.163 [2024-07-15 14:05:38.793923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.163 qpair failed and we were unable to recover it. 
00:26:44.163 [2024-07-15 14:05:38.794030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.163 [2024-07-15 14:05:38.794059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.163 qpair failed and we were unable to recover it. 00:26:44.163 [2024-07-15 14:05:38.794193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.163 [2024-07-15 14:05:38.794218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.163 qpair failed and we were unable to recover it. 00:26:44.163 [2024-07-15 14:05:38.794378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.163 [2024-07-15 14:05:38.794434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.163 qpair failed and we were unable to recover it. 00:26:44.163 [2024-07-15 14:05:38.794577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.163 [2024-07-15 14:05:38.794621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.163 qpair failed and we were unable to recover it. 00:26:44.163 [2024-07-15 14:05:38.794772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.163 [2024-07-15 14:05:38.794798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.163 qpair failed and we were unable to recover it. 00:26:44.163 [2024-07-15 14:05:38.794906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.163 [2024-07-15 14:05:38.794931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.163 qpair failed and we were unable to recover it. 00:26:44.163 [2024-07-15 14:05:38.795043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.163 [2024-07-15 14:05:38.795074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.163 qpair failed and we were unable to recover it. 00:26:44.163 [2024-07-15 14:05:38.795196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.163 [2024-07-15 14:05:38.795221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.163 qpair failed and we were unable to recover it. 00:26:44.163 [2024-07-15 14:05:38.795377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.163 [2024-07-15 14:05:38.795411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.163 qpair failed and we were unable to recover it. 00:26:44.163 [2024-07-15 14:05:38.795514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.163 [2024-07-15 14:05:38.795545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.163 qpair failed and we were unable to recover it. 
00:26:44.163 [2024-07-15 14:05:38.795697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.163 [2024-07-15 14:05:38.795731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.163 qpair failed and we were unable to recover it. 00:26:44.163 [2024-07-15 14:05:38.795931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.163 [2024-07-15 14:05:38.795957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.163 qpair failed and we were unable to recover it. 00:26:44.163 [2024-07-15 14:05:38.796084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.163 [2024-07-15 14:05:38.796129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.163 qpair failed and we were unable to recover it. 00:26:44.163 [2024-07-15 14:05:38.796352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.163 [2024-07-15 14:05:38.796386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.163 qpair failed and we were unable to recover it. 00:26:44.163 [2024-07-15 14:05:38.796565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.163 [2024-07-15 14:05:38.796598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.163 qpair failed and we were unable to recover it. 00:26:44.163 [2024-07-15 14:05:38.796735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.163 [2024-07-15 14:05:38.796768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.163 qpair failed and we were unable to recover it. 00:26:44.163 [2024-07-15 14:05:38.796895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.163 [2024-07-15 14:05:38.796920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.163 qpair failed and we were unable to recover it. 00:26:44.163 [2024-07-15 14:05:38.797066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.163 [2024-07-15 14:05:38.797098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.163 qpair failed and we were unable to recover it. 00:26:44.163 [2024-07-15 14:05:38.797238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.163 [2024-07-15 14:05:38.797272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.163 qpair failed and we were unable to recover it. 00:26:44.163 [2024-07-15 14:05:38.797418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.163 [2024-07-15 14:05:38.797474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.163 qpair failed and we were unable to recover it. 
00:26:44.163 [2024-07-15 14:05:38.797630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.163 [2024-07-15 14:05:38.797673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.163 qpair failed and we were unable to recover it. 00:26:44.163 [2024-07-15 14:05:38.797830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.163 [2024-07-15 14:05:38.797856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.163 qpair failed and we were unable to recover it. 00:26:44.163 [2024-07-15 14:05:38.797962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.163 [2024-07-15 14:05:38.797986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.163 qpair failed and we were unable to recover it. 00:26:44.163 [2024-07-15 14:05:38.798167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.163 [2024-07-15 14:05:38.798207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.163 qpair failed and we were unable to recover it. 00:26:44.163 [2024-07-15 14:05:38.798340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.163 [2024-07-15 14:05:38.798372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.163 qpair failed and we were unable to recover it. 00:26:44.163 [2024-07-15 14:05:38.798526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.163 [2024-07-15 14:05:38.798558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.163 qpair failed and we were unable to recover it. 00:26:44.163 [2024-07-15 14:05:38.798675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.163 [2024-07-15 14:05:38.798727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.163 qpair failed and we were unable to recover it. 00:26:44.163 [2024-07-15 14:05:38.798861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.163 [2024-07-15 14:05:38.798887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.163 qpair failed and we were unable to recover it. 00:26:44.163 [2024-07-15 14:05:38.799033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.164 [2024-07-15 14:05:38.799065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.164 qpair failed and we were unable to recover it. 00:26:44.164 [2024-07-15 14:05:38.799203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.164 [2024-07-15 14:05:38.799235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.164 qpair failed and we were unable to recover it. 
00:26:44.164 [2024-07-15 14:05:38.799390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.164 [2024-07-15 14:05:38.799444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.164 qpair failed and we were unable to recover it. 00:26:44.164 [2024-07-15 14:05:38.799620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.164 [2024-07-15 14:05:38.799662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.164 qpair failed and we were unable to recover it. 00:26:44.164 [2024-07-15 14:05:38.799807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.164 [2024-07-15 14:05:38.799833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.164 qpair failed and we were unable to recover it. 00:26:44.164 [2024-07-15 14:05:38.799941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.164 [2024-07-15 14:05:38.799966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.164 qpair failed and we were unable to recover it. 00:26:44.164 [2024-07-15 14:05:38.800072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.164 [2024-07-15 14:05:38.800098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.164 qpair failed and we were unable to recover it. 00:26:44.164 [2024-07-15 14:05:38.800205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.164 [2024-07-15 14:05:38.800232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.164 qpair failed and we were unable to recover it. 00:26:44.164 [2024-07-15 14:05:38.800370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.164 [2024-07-15 14:05:38.800403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.164 qpair failed and we were unable to recover it. 00:26:44.164 [2024-07-15 14:05:38.800553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.164 [2024-07-15 14:05:38.800578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.164 qpair failed and we were unable to recover it. 00:26:44.164 [2024-07-15 14:05:38.800759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.164 [2024-07-15 14:05:38.800786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.164 qpair failed and we were unable to recover it. 00:26:44.164 [2024-07-15 14:05:38.800917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.164 [2024-07-15 14:05:38.800963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.164 qpair failed and we were unable to recover it. 
00:26:44.164 [2024-07-15 14:05:38.801967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.164 [2024-07-15 14:05:38.801997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.164 qpair failed and we were unable to recover it. 00:26:44.164 [2024-07-15 14:05:38.802185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.164 [2024-07-15 14:05:38.802212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.164 qpair failed and we were unable to recover it. 00:26:44.164 [2024-07-15 14:05:38.802367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.164 [2024-07-15 14:05:38.802394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.164 qpair failed and we were unable to recover it. 00:26:44.164 [2024-07-15 14:05:38.802537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.164 [2024-07-15 14:05:38.802563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.164 qpair failed and we were unable to recover it. 00:26:44.164 [2024-07-15 14:05:38.802670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.164 [2024-07-15 14:05:38.802696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.164 qpair failed and we were unable to recover it. 00:26:44.164 [2024-07-15 14:05:38.802833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.164 [2024-07-15 14:05:38.802860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.164 qpair failed and we were unable to recover it. 00:26:44.164 [2024-07-15 14:05:38.802970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.164 [2024-07-15 14:05:38.802996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.164 qpair failed and we were unable to recover it. 00:26:44.164 [2024-07-15 14:05:38.803163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.164 [2024-07-15 14:05:38.803189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.164 qpair failed and we were unable to recover it. 00:26:44.164 [2024-07-15 14:05:38.803400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.164 [2024-07-15 14:05:38.803426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.164 qpair failed and we were unable to recover it. 00:26:44.164 [2024-07-15 14:05:38.803559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.164 [2024-07-15 14:05:38.803584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.164 qpair failed and we were unable to recover it. 
00:26:44.164 [2024-07-15 14:05:38.803702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.164 [2024-07-15 14:05:38.803733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.164 qpair failed and we were unable to recover it. 00:26:44.164 [2024-07-15 14:05:38.803848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.164 [2024-07-15 14:05:38.803874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.164 qpair failed and we were unable to recover it. 00:26:44.164 [2024-07-15 14:05:38.804025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.164 [2024-07-15 14:05:38.804052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.164 qpair failed and we were unable to recover it. 00:26:44.164 [2024-07-15 14:05:38.804145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.164 [2024-07-15 14:05:38.804169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.164 qpair failed and we were unable to recover it. 00:26:44.164 [2024-07-15 14:05:38.804290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.164 [2024-07-15 14:05:38.804316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.164 qpair failed and we were unable to recover it. 00:26:44.164 [2024-07-15 14:05:38.804444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.164 [2024-07-15 14:05:38.804470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.164 qpair failed and we were unable to recover it. 00:26:44.164 [2024-07-15 14:05:38.804606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.164 [2024-07-15 14:05:38.804632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.164 qpair failed and we were unable to recover it. 00:26:44.164 [2024-07-15 14:05:38.804762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.164 [2024-07-15 14:05:38.804789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.164 qpair failed and we were unable to recover it. 00:26:44.164 [2024-07-15 14:05:38.804930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.164 [2024-07-15 14:05:38.804957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.164 qpair failed and we were unable to recover it. 00:26:44.164 [2024-07-15 14:05:38.805108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.164 [2024-07-15 14:05:38.805134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.164 qpair failed and we were unable to recover it. 
00:26:44.164 [2024-07-15 14:05:38.805233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.164 [2024-07-15 14:05:38.805259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.164 qpair failed and we were unable to recover it. 00:26:44.164 [2024-07-15 14:05:38.805365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.164 [2024-07-15 14:05:38.805391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.164 qpair failed and we were unable to recover it. 00:26:44.164 [2024-07-15 14:05:38.805520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.164 [2024-07-15 14:05:38.805545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.164 qpair failed and we were unable to recover it. 00:26:44.164 [2024-07-15 14:05:38.805679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.164 [2024-07-15 14:05:38.805705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.164 qpair failed and we were unable to recover it. 00:26:44.164 [2024-07-15 14:05:38.805849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.164 [2024-07-15 14:05:38.805876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.164 qpair failed and we were unable to recover it. 00:26:44.164 [2024-07-15 14:05:38.805976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.164 [2024-07-15 14:05:38.806002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.164 qpair failed and we were unable to recover it. 00:26:44.164 [2024-07-15 14:05:38.806101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.164 [2024-07-15 14:05:38.806127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.164 qpair failed and we were unable to recover it. 00:26:44.164 [2024-07-15 14:05:38.806254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.164 [2024-07-15 14:05:38.806280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.164 qpair failed and we were unable to recover it. 00:26:44.164 [2024-07-15 14:05:38.806384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.165 [2024-07-15 14:05:38.806410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.165 qpair failed and we were unable to recover it. 00:26:44.165 [2024-07-15 14:05:38.806564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.165 [2024-07-15 14:05:38.806590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.165 qpair failed and we were unable to recover it. 
00:26:44.165 [2024-07-15 14:05:38.806708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.165 [2024-07-15 14:05:38.806743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.165 qpair failed and we were unable to recover it. 00:26:44.165 [2024-07-15 14:05:38.806853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.165 [2024-07-15 14:05:38.806878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.165 qpair failed and we were unable to recover it. 00:26:44.165 [2024-07-15 14:05:38.807006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.165 [2024-07-15 14:05:38.807033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.165 qpair failed and we were unable to recover it. 00:26:44.165 [2024-07-15 14:05:38.807137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.165 [2024-07-15 14:05:38.807162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.165 qpair failed and we were unable to recover it. 00:26:44.165 [2024-07-15 14:05:38.807261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.165 [2024-07-15 14:05:38.807286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.165 qpair failed and we were unable to recover it. 00:26:44.165 [2024-07-15 14:05:38.807417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.165 [2024-07-15 14:05:38.807443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.165 qpair failed and we were unable to recover it. 00:26:44.165 [2024-07-15 14:05:38.808130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.165 [2024-07-15 14:05:38.808160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.165 qpair failed and we were unable to recover it. 00:26:44.165 [2024-07-15 14:05:38.808270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.165 [2024-07-15 14:05:38.808300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.165 qpair failed and we were unable to recover it. 00:26:44.165 [2024-07-15 14:05:38.808428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.165 [2024-07-15 14:05:38.808454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.165 qpair failed and we were unable to recover it. 00:26:44.165 [2024-07-15 14:05:38.808584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.165 [2024-07-15 14:05:38.808610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.165 qpair failed and we were unable to recover it. 
00:26:44.165 [2024-07-15 14:05:38.808771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.165 [2024-07-15 14:05:38.808798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.165 qpair failed and we were unable to recover it. 00:26:44.165 [2024-07-15 14:05:38.808907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.165 [2024-07-15 14:05:38.808933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.165 qpair failed and we were unable to recover it. 00:26:44.165 [2024-07-15 14:05:38.809049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.165 [2024-07-15 14:05:38.809076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.165 qpair failed and we were unable to recover it. 00:26:44.165 [2024-07-15 14:05:38.809215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.165 [2024-07-15 14:05:38.809241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.165 qpair failed and we were unable to recover it. 00:26:44.165 [2024-07-15 14:05:38.809351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.165 [2024-07-15 14:05:38.809377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.165 qpair failed and we were unable to recover it. 00:26:44.165 [2024-07-15 14:05:38.809500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.165 [2024-07-15 14:05:38.809526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.165 qpair failed and we were unable to recover it. 00:26:44.165 [2024-07-15 14:05:38.809632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.165 [2024-07-15 14:05:38.809658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.165 qpair failed and we were unable to recover it. 00:26:44.165 [2024-07-15 14:05:38.809758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.165 [2024-07-15 14:05:38.809784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.165 qpair failed and we were unable to recover it. 00:26:44.165 [2024-07-15 14:05:38.809917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.165 [2024-07-15 14:05:38.809943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.165 qpair failed and we were unable to recover it. 00:26:44.165 [2024-07-15 14:05:38.810598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.165 [2024-07-15 14:05:38.810627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.165 qpair failed and we were unable to recover it. 
00:26:44.165 [2024-07-15 14:05:38.810811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.165 [2024-07-15 14:05:38.810838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420
00:26:44.165 qpair failed and we were unable to recover it.
00:26:44.165-00:26:44.171 [2024-07-15 14:05:38.810977 .. 14:05:38.842416] (the same three-line failure sequence repeats for every further reconnect attempt in this interval: posix_sock_create "connect() failed, errno = 111", followed by the nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x23f9ea0 at addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it.")
00:26:44.171 [2024-07-15 14:05:38.842511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.171 [2024-07-15 14:05:38.842536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.171 qpair failed and we were unable to recover it. 00:26:44.171 [2024-07-15 14:05:38.842632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.171 [2024-07-15 14:05:38.842658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.171 qpair failed and we were unable to recover it. 00:26:44.171 [2024-07-15 14:05:38.842769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.171 [2024-07-15 14:05:38.842795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.171 qpair failed and we were unable to recover it. 00:26:44.171 [2024-07-15 14:05:38.842903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.171 [2024-07-15 14:05:38.842929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.171 qpair failed and we were unable to recover it. 00:26:44.171 [2024-07-15 14:05:38.843062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.171 [2024-07-15 14:05:38.843087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.171 qpair failed and we were unable to recover it. 00:26:44.171 [2024-07-15 14:05:38.843217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.171 [2024-07-15 14:05:38.843242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.171 qpair failed and we were unable to recover it. 00:26:44.172 [2024-07-15 14:05:38.843371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.172 [2024-07-15 14:05:38.843397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.172 qpair failed and we were unable to recover it. 00:26:44.172 [2024-07-15 14:05:38.843498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.172 [2024-07-15 14:05:38.843523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.172 qpair failed and we were unable to recover it. 00:26:44.172 [2024-07-15 14:05:38.843653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.172 [2024-07-15 14:05:38.843679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.172 qpair failed and we were unable to recover it. 00:26:44.172 [2024-07-15 14:05:38.843793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.172 [2024-07-15 14:05:38.843819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.172 qpair failed and we were unable to recover it. 
00:26:44.172 [2024-07-15 14:05:38.843923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.172 [2024-07-15 14:05:38.843949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.172 qpair failed and we were unable to recover it. 00:26:44.172 [2024-07-15 14:05:38.844091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.172 [2024-07-15 14:05:38.844116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.172 qpair failed and we were unable to recover it. 00:26:44.172 [2024-07-15 14:05:38.844244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.172 [2024-07-15 14:05:38.844269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.172 qpair failed and we were unable to recover it. 00:26:44.172 [2024-07-15 14:05:38.844364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.172 [2024-07-15 14:05:38.844389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.172 qpair failed and we were unable to recover it. 00:26:44.172 [2024-07-15 14:05:38.844506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.172 [2024-07-15 14:05:38.844531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.172 qpair failed and we were unable to recover it. 00:26:44.172 [2024-07-15 14:05:38.844690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.172 [2024-07-15 14:05:38.844716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420 00:26:44.172 qpair failed and we were unable to recover it. 00:26:44.172 [2024-07-15 14:05:38.844847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.172 [2024-07-15 14:05:38.844889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.172 qpair failed and we were unable to recover it. 00:26:44.172 [2024-07-15 14:05:38.845007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.172 [2024-07-15 14:05:38.845035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.172 qpair failed and we were unable to recover it. 00:26:44.172 [2024-07-15 14:05:38.845194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.172 [2024-07-15 14:05:38.845221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.172 qpair failed and we were unable to recover it. 00:26:44.172 [2024-07-15 14:05:38.845351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.172 [2024-07-15 14:05:38.845377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.172 qpair failed and we were unable to recover it. 
00:26:44.172 [2024-07-15 14:05:38.845510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.172 [2024-07-15 14:05:38.845537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.172 qpair failed and we were unable to recover it. 00:26:44.172 [2024-07-15 14:05:38.845642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.172 [2024-07-15 14:05:38.845673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.172 qpair failed and we were unable to recover it. 00:26:44.172 [2024-07-15 14:05:38.845780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.172 [2024-07-15 14:05:38.845807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.172 qpair failed and we were unable to recover it. 00:26:44.172 [2024-07-15 14:05:38.845913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.172 [2024-07-15 14:05:38.845939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.172 qpair failed and we were unable to recover it. 00:26:44.172 [2024-07-15 14:05:38.846037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.172 [2024-07-15 14:05:38.846063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.172 qpair failed and we were unable to recover it. 00:26:44.172 [2024-07-15 14:05:38.846168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.172 [2024-07-15 14:05:38.846195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.172 qpair failed and we were unable to recover it. 00:26:44.172 [2024-07-15 14:05:38.846297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.172 [2024-07-15 14:05:38.846324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.172 qpair failed and we were unable to recover it. 00:26:44.172 [2024-07-15 14:05:38.846439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.172 [2024-07-15 14:05:38.846465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.172 qpair failed and we were unable to recover it. 00:26:44.172 [2024-07-15 14:05:38.846574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.172 [2024-07-15 14:05:38.846601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.172 qpair failed and we were unable to recover it. 00:26:44.172 [2024-07-15 14:05:38.846724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.172 [2024-07-15 14:05:38.846757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.172 qpair failed and we were unable to recover it. 
00:26:44.172 [2024-07-15 14:05:38.846884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.172 [2024-07-15 14:05:38.846910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.172 qpair failed and we were unable to recover it. 00:26:44.172 [2024-07-15 14:05:38.847012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.172 [2024-07-15 14:05:38.847038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.172 qpair failed and we were unable to recover it. 00:26:44.172 [2024-07-15 14:05:38.847987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.172 [2024-07-15 14:05:38.848018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.173 qpair failed and we were unable to recover it. 00:26:44.173 [2024-07-15 14:05:38.848229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.173 [2024-07-15 14:05:38.848256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.173 qpair failed and we were unable to recover it. 00:26:44.173 [2024-07-15 14:05:38.848367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.173 [2024-07-15 14:05:38.848394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.173 qpair failed and we were unable to recover it. 00:26:44.173 [2024-07-15 14:05:38.848597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.173 [2024-07-15 14:05:38.848624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.173 qpair failed and we were unable to recover it. 00:26:44.173 [2024-07-15 14:05:38.848777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.173 [2024-07-15 14:05:38.848804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.173 qpair failed and we were unable to recover it. 00:26:44.173 [2024-07-15 14:05:38.848914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.173 [2024-07-15 14:05:38.848940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.173 qpair failed and we were unable to recover it. 00:26:44.173 [2024-07-15 14:05:38.849104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.173 [2024-07-15 14:05:38.849131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.173 qpair failed and we were unable to recover it. 00:26:44.173 [2024-07-15 14:05:38.849289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.173 [2024-07-15 14:05:38.849315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.173 qpair failed and we were unable to recover it. 
00:26:44.173 [2024-07-15 14:05:38.849446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.173 [2024-07-15 14:05:38.849472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.173 qpair failed and we were unable to recover it. 00:26:44.173 [2024-07-15 14:05:38.849575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.173 [2024-07-15 14:05:38.849601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.173 qpair failed and we were unable to recover it. 00:26:44.173 [2024-07-15 14:05:38.849760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.173 [2024-07-15 14:05:38.849787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.173 qpair failed and we were unable to recover it. 00:26:44.173 [2024-07-15 14:05:38.849887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.173 [2024-07-15 14:05:38.849913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.173 qpair failed and we were unable to recover it. 00:26:44.173 [2024-07-15 14:05:38.850010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.173 [2024-07-15 14:05:38.850036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.173 qpair failed and we were unable to recover it. 00:26:44.173 [2024-07-15 14:05:38.850159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.173 [2024-07-15 14:05:38.850184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.173 qpair failed and we were unable to recover it. 00:26:44.173 [2024-07-15 14:05:38.850312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.173 [2024-07-15 14:05:38.850339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.173 qpair failed and we were unable to recover it. 00:26:44.173 [2024-07-15 14:05:38.850471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.173 [2024-07-15 14:05:38.850498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.173 qpair failed and we were unable to recover it. 00:26:44.173 [2024-07-15 14:05:38.850638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.173 [2024-07-15 14:05:38.850665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.173 qpair failed and we were unable to recover it. 00:26:44.173 [2024-07-15 14:05:38.850774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.173 [2024-07-15 14:05:38.850802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.173 qpair failed and we were unable to recover it. 
00:26:44.173 [2024-07-15 14:05:38.850897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.173 [2024-07-15 14:05:38.850923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.173 qpair failed and we were unable to recover it. 00:26:44.173 [2024-07-15 14:05:38.851031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.173 [2024-07-15 14:05:38.851057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.173 qpair failed and we were unable to recover it. 00:26:44.173 [2024-07-15 14:05:38.851182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.173 [2024-07-15 14:05:38.851208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.173 qpair failed and we were unable to recover it. 00:26:44.173 [2024-07-15 14:05:38.851316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.173 [2024-07-15 14:05:38.851342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.173 qpair failed and we were unable to recover it. 00:26:44.173 [2024-07-15 14:05:38.851438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.173 [2024-07-15 14:05:38.851475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.173 qpair failed and we were unable to recover it. 00:26:44.173 [2024-07-15 14:05:38.851640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.173 [2024-07-15 14:05:38.851666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.173 qpair failed and we were unable to recover it. 00:26:44.173 [2024-07-15 14:05:38.851762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.173 [2024-07-15 14:05:38.851789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.173 qpair failed and we were unable to recover it. 00:26:44.173 [2024-07-15 14:05:38.851887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.173 [2024-07-15 14:05:38.851913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.173 qpair failed and we were unable to recover it. 00:26:44.173 [2024-07-15 14:05:38.852018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.173 [2024-07-15 14:05:38.852044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.173 qpair failed and we were unable to recover it. 00:26:44.174 [2024-07-15 14:05:38.852170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 14:05:38.852196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 
00:26:44.174 [2024-07-15 14:05:38.852319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 14:05:38.852346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 00:26:44.174 [2024-07-15 14:05:38.852474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 14:05:38.852504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 00:26:44.174 [2024-07-15 14:05:38.852635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 14:05:38.852662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 00:26:44.174 [2024-07-15 14:05:38.852800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 14:05:38.852827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 00:26:44.174 [2024-07-15 14:05:38.852938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 14:05:38.852965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 00:26:44.174 [2024-07-15 14:05:38.853093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 14:05:38.853119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 00:26:44.174 [2024-07-15 14:05:38.853832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 14:05:38.853863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 00:26:44.174 [2024-07-15 14:05:38.853982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 14:05:38.854009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 00:26:44.174 [2024-07-15 14:05:38.854117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 14:05:38.854144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 00:26:44.174 [2024-07-15 14:05:38.854273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 14:05:38.854300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 
00:26:44.174 [2024-07-15 14:05:38.854409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 14:05:38.854435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 00:26:44.174 [2024-07-15 14:05:38.854568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 14:05:38.854594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 00:26:44.174 [2024-07-15 14:05:38.854831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 14:05:38.854860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 00:26:44.174 [2024-07-15 14:05:38.854967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 14:05:38.854993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 00:26:44.174 [2024-07-15 14:05:38.855120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 14:05:38.855146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 00:26:44.174 [2024-07-15 14:05:38.855267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 14:05:38.855294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 00:26:44.174 [2024-07-15 14:05:38.855457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 14:05:38.855483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 00:26:44.174 [2024-07-15 14:05:38.855582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 14:05:38.855609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 00:26:44.174 [2024-07-15 14:05:38.855713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.174 [2024-07-15 14:05:38.855745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.174 qpair failed and we were unable to recover it. 00:26:44.174 [2024-07-15 14:05:38.855852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 14:05:38.855879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 
00:26:44.175 [2024-07-15 14:05:38.855982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 14:05:38.856009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 00:26:44.175 [2024-07-15 14:05:38.856144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 14:05:38.856170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 00:26:44.175 [2024-07-15 14:05:38.856298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 14:05:38.856324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 00:26:44.175 [2024-07-15 14:05:38.856430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 14:05:38.856456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 00:26:44.175 [2024-07-15 14:05:38.856619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 14:05:38.856646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 00:26:44.175 [2024-07-15 14:05:38.856777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 14:05:38.856805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 00:26:44.175 [2024-07-15 14:05:38.856904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 14:05:38.856931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 00:26:44.175 [2024-07-15 14:05:38.857065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 14:05:38.857092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 00:26:44.175 [2024-07-15 14:05:38.857199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 14:05:38.857226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 00:26:44.175 [2024-07-15 14:05:38.857381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 14:05:38.857408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 
00:26:44.175 [2024-07-15 14:05:38.857539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 14:05:38.857565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 00:26:44.175 [2024-07-15 14:05:38.857668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 14:05:38.857694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 00:26:44.175 [2024-07-15 14:05:38.857796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 14:05:38.857823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 00:26:44.175 [2024-07-15 14:05:38.857927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 14:05:38.857955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 00:26:44.175 [2024-07-15 14:05:38.858087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 14:05:38.858113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 00:26:44.175 [2024-07-15 14:05:38.858249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 14:05:38.858276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 00:26:44.175 [2024-07-15 14:05:38.858386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 14:05:38.858412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 00:26:44.175 [2024-07-15 14:05:38.858599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 14:05:38.858631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 00:26:44.175 [2024-07-15 14:05:38.858790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 14:05:38.858817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 00:26:44.175 [2024-07-15 14:05:38.858922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 14:05:38.858948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 
00:26:44.175 [2024-07-15 14:05:38.859045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 14:05:38.859072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 00:26:44.175 [2024-07-15 14:05:38.859202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 14:05:38.859229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 00:26:44.175 [2024-07-15 14:05:38.859389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 14:05:38.859422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 00:26:44.175 [2024-07-15 14:05:38.859568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 14:05:38.859600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 00:26:44.175 [2024-07-15 14:05:38.859741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 14:05:38.859769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 00:26:44.175 [2024-07-15 14:05:38.859876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.175 [2024-07-15 14:05:38.859901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.175 qpair failed and we were unable to recover it. 00:26:44.175 [2024-07-15 14:05:38.860012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 14:05:38.860058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 00:26:44.176 [2024-07-15 14:05:38.860226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 14:05:38.860258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 00:26:44.176 [2024-07-15 14:05:38.860378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 14:05:38.860423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 00:26:44.176 [2024-07-15 14:05:38.860563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 14:05:38.860595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 
00:26:44.176 [2024-07-15 14:05:38.860706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 14:05:38.860747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 00:26:44.176 [2024-07-15 14:05:38.860868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 14:05:38.860894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 00:26:44.176 [2024-07-15 14:05:38.860995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 14:05:38.861021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 00:26:44.176 [2024-07-15 14:05:38.861116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 14:05:38.861143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 00:26:44.176 [2024-07-15 14:05:38.861281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 14:05:38.861312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 00:26:44.176 [2024-07-15 14:05:38.861463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 14:05:38.861495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 00:26:44.176 [2024-07-15 14:05:38.861608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 14:05:38.861652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 00:26:44.176 [2024-07-15 14:05:38.861784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 14:05:38.861810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 00:26:44.176 [2024-07-15 14:05:38.861917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 14:05:38.861944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 00:26:44.176 [2024-07-15 14:05:38.862049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 14:05:38.862082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 
00:26:44.176 [2024-07-15 14:05:38.862219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 14:05:38.862245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 00:26:44.176 [2024-07-15 14:05:38.862385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 14:05:38.862425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 00:26:44.176 [2024-07-15 14:05:38.862560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 14:05:38.862593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 00:26:44.176 [2024-07-15 14:05:38.862733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 14:05:38.862784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 00:26:44.176 [2024-07-15 14:05:38.862886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 14:05:38.862911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 00:26:44.176 [2024-07-15 14:05:38.863027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 14:05:38.863053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 00:26:44.176 [2024-07-15 14:05:38.863165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 14:05:38.863198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 00:26:44.176 [2024-07-15 14:05:38.863338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 14:05:38.863371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 00:26:44.176 [2024-07-15 14:05:38.863512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 14:05:38.863548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 00:26:44.176 [2024-07-15 14:05:38.863667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 14:05:38.863699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 
00:26:44.176 [2024-07-15 14:05:38.863835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 14:05:38.863863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 00:26:44.176 [2024-07-15 14:05:38.863961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 14:05:38.863987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 00:26:44.176 [2024-07-15 14:05:38.864082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 14:05:38.864123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 00:26:44.176 [2024-07-15 14:05:38.864295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.176 [2024-07-15 14:05:38.864326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.176 qpair failed and we were unable to recover it. 00:26:44.177 [2024-07-15 14:05:38.864433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 14:05:38.864465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 00:26:44.177 [2024-07-15 14:05:38.864601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 14:05:38.864648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 00:26:44.177 [2024-07-15 14:05:38.864797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 14:05:38.864823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 00:26:44.177 [2024-07-15 14:05:38.864929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 14:05:38.864955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 00:26:44.177 [2024-07-15 14:05:38.865113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 14:05:38.865146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 00:26:44.177 [2024-07-15 14:05:38.865330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 14:05:38.865362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 
00:26:44.177 [2024-07-15 14:05:38.865497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 14:05:38.865528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 00:26:44.177 [2024-07-15 14:05:38.865670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 14:05:38.865703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 00:26:44.177 [2024-07-15 14:05:38.865839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 14:05:38.865867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 00:26:44.177 [2024-07-15 14:05:38.865977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 14:05:38.866002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 00:26:44.177 [2024-07-15 14:05:38.866173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 14:05:38.866198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 00:26:44.177 [2024-07-15 14:05:38.866312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 14:05:38.866345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 00:26:44.177 [2024-07-15 14:05:38.866486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 14:05:38.866518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 00:26:44.177 [2024-07-15 14:05:38.866705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 14:05:38.866746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 00:26:44.177 [2024-07-15 14:05:38.866865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 14:05:38.866891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 00:26:44.177 [2024-07-15 14:05:38.866992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 14:05:38.867032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 
00:26:44.177 [2024-07-15 14:05:38.867200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 14:05:38.867233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 00:26:44.177 [2024-07-15 14:05:38.867403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 14:05:38.867436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 00:26:44.177 [2024-07-15 14:05:38.867545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 14:05:38.867577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 00:26:44.177 [2024-07-15 14:05:38.867759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 14:05:38.867803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 00:26:44.177 [2024-07-15 14:05:38.867908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 14:05:38.867934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 00:26:44.177 [2024-07-15 14:05:38.868074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 14:05:38.868099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 00:26:44.177 [2024-07-15 14:05:38.868225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 14:05:38.868272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 00:26:44.177 [2024-07-15 14:05:38.868441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 14:05:38.868474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 00:26:44.177 [2024-07-15 14:05:38.868618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 14:05:38.868651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 00:26:44.177 [2024-07-15 14:05:38.868776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 14:05:38.868802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 
00:26:44.177 [2024-07-15 14:05:38.868914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 14:05:38.868939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 00:26:44.177 [2024-07-15 14:05:38.869078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.177 [2024-07-15 14:05:38.869111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.177 qpair failed and we were unable to recover it. 00:26:44.177 [2024-07-15 14:05:38.869241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.178 [2024-07-15 14:05:38.869288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.178 qpair failed and we were unable to recover it. 00:26:44.178 [2024-07-15 14:05:38.869441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.178 [2024-07-15 14:05:38.869468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.178 qpair failed and we were unable to recover it. 00:26:44.178 [2024-07-15 14:05:38.869605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.178 [2024-07-15 14:05:38.869638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.178 qpair failed and we were unable to recover it. 00:26:44.178 [2024-07-15 14:05:38.869767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.178 [2024-07-15 14:05:38.869809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.178 qpair failed and we were unable to recover it. 00:26:44.178 [2024-07-15 14:05:38.869938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.178 [2024-07-15 14:05:38.869965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.178 qpair failed and we were unable to recover it. 00:26:44.178 [2024-07-15 14:05:38.870103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.178 [2024-07-15 14:05:38.870128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.178 qpair failed and we were unable to recover it. 00:26:44.178 [2024-07-15 14:05:38.870223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.178 [2024-07-15 14:05:38.870252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.178 qpair failed and we were unable to recover it. 00:26:44.178 [2024-07-15 14:05:38.870411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.178 [2024-07-15 14:05:38.870444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.178 qpair failed and we were unable to recover it. 
00:26:44.178 [2024-07-15 14:05:38.870612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.178 [2024-07-15 14:05:38.870643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.178 qpair failed and we were unable to recover it. 00:26:44.178 [2024-07-15 14:05:38.870782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.178 [2024-07-15 14:05:38.870808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.178 qpair failed and we were unable to recover it. 00:26:44.178 [2024-07-15 14:05:38.870905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.178 [2024-07-15 14:05:38.870932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.178 qpair failed and we were unable to recover it. 00:26:44.178 [2024-07-15 14:05:38.871073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.178 [2024-07-15 14:05:38.871115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.178 qpair failed and we were unable to recover it. 00:26:44.178 [2024-07-15 14:05:38.871236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.178 [2024-07-15 14:05:38.871269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.178 qpair failed and we were unable to recover it. 00:26:44.178 [2024-07-15 14:05:38.871489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.178 [2024-07-15 14:05:38.871521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.178 qpair failed and we were unable to recover it. 00:26:44.178 [2024-07-15 14:05:38.871666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.178 [2024-07-15 14:05:38.871698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.178 qpair failed and we were unable to recover it. 00:26:44.178 [2024-07-15 14:05:38.871849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.178 [2024-07-15 14:05:38.871881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.178 qpair failed and we were unable to recover it. 00:26:44.178 [2024-07-15 14:05:38.871991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.178 [2024-07-15 14:05:38.872023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.178 qpair failed and we were unable to recover it. 00:26:44.178 [2024-07-15 14:05:38.872191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.178 [2024-07-15 14:05:38.872225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.178 qpair failed and we were unable to recover it. 
00:26:44.178 [2024-07-15 14:05:38.872388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.178 [2024-07-15 14:05:38.872420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.178 qpair failed and we were unable to recover it. 00:26:44.178 [2024-07-15 14:05:38.872562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.178 [2024-07-15 14:05:38.872593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.178 qpair failed and we were unable to recover it. 00:26:44.178 [2024-07-15 14:05:38.872708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.178 [2024-07-15 14:05:38.872751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.178 qpair failed and we were unable to recover it. 00:26:44.178 [2024-07-15 14:05:38.872874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.178 [2024-07-15 14:05:38.872905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.178 qpair failed and we were unable to recover it. 00:26:44.178 [2024-07-15 14:05:38.873017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.178 [2024-07-15 14:05:38.873050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.178 qpair failed and we were unable to recover it. 00:26:44.178 [2024-07-15 14:05:38.873263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.178 [2024-07-15 14:05:38.873295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.178 qpair failed and we were unable to recover it. 00:26:44.178 [2024-07-15 14:05:38.873407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.178 [2024-07-15 14:05:38.873438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.178 qpair failed and we were unable to recover it. 00:26:44.178 [2024-07-15 14:05:38.873606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.178 [2024-07-15 14:05:38.873637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.178 qpair failed and we were unable to recover it. 00:26:44.178 [2024-07-15 14:05:38.873780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.178 [2024-07-15 14:05:38.873814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.178 qpair failed and we were unable to recover it. 00:26:44.178 [2024-07-15 14:05:38.873933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.178 [2024-07-15 14:05:38.873965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.178 qpair failed and we were unable to recover it. 
00:26:44.178 [2024-07-15 14:05:38.874073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.178 [2024-07-15 14:05:38.874105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.178 qpair failed and we were unable to recover it. 00:26:44.178 [2024-07-15 14:05:38.874249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.178 [2024-07-15 14:05:38.874282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.178 qpair failed and we were unable to recover it. 00:26:44.178 [2024-07-15 14:05:38.874397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.178 [2024-07-15 14:05:38.874429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.178 qpair failed and we were unable to recover it. 00:26:44.178 [2024-07-15 14:05:38.874569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.178 [2024-07-15 14:05:38.874595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.178 qpair failed and we were unable to recover it. 00:26:44.178 [2024-07-15 14:05:38.874725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.178 [2024-07-15 14:05:38.874759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.178 qpair failed and we were unable to recover it. 00:26:44.178 [2024-07-15 14:05:38.874858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.178 [2024-07-15 14:05:38.874883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.178 qpair failed and we were unable to recover it. 00:26:44.178 [2024-07-15 14:05:38.874981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.178 [2024-07-15 14:05:38.875007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.178 qpair failed and we were unable to recover it. 00:26:44.178 [2024-07-15 14:05:38.875144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.178 [2024-07-15 14:05:38.875169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.178 qpair failed and we were unable to recover it. 00:26:44.178 [2024-07-15 14:05:38.875265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.178 [2024-07-15 14:05:38.875290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.179 qpair failed and we were unable to recover it. 00:26:44.179 [2024-07-15 14:05:38.875415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.179 [2024-07-15 14:05:38.875441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.179 qpair failed and we were unable to recover it. 
00:26:44.179 [2024-07-15 14:05:38.875567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.179 [2024-07-15 14:05:38.875593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.179 qpair failed and we were unable to recover it. 00:26:44.179 [2024-07-15 14:05:38.875731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.179 [2024-07-15 14:05:38.875769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.179 qpair failed and we were unable to recover it. 00:26:44.179 [2024-07-15 14:05:38.875882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.179 [2024-07-15 14:05:38.875914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.179 qpair failed and we were unable to recover it. 00:26:44.179 [2024-07-15 14:05:38.876051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.179 [2024-07-15 14:05:38.876083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.179 qpair failed and we were unable to recover it. 00:26:44.179 [2024-07-15 14:05:38.876199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.179 [2024-07-15 14:05:38.876230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.179 qpair failed and we were unable to recover it. 00:26:44.179 [2024-07-15 14:05:38.876376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.179 [2024-07-15 14:05:38.876420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.179 qpair failed and we were unable to recover it. 00:26:44.179 [2024-07-15 14:05:38.876563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.179 [2024-07-15 14:05:38.876596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.179 qpair failed and we were unable to recover it. 00:26:44.179 [2024-07-15 14:05:38.876743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.179 [2024-07-15 14:05:38.876775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.179 qpair failed and we were unable to recover it. 00:26:44.179 [2024-07-15 14:05:38.876896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.179 [2024-07-15 14:05:38.876933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.179 qpair failed and we were unable to recover it. 00:26:44.179 [2024-07-15 14:05:38.877076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.179 [2024-07-15 14:05:38.877107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.179 qpair failed and we were unable to recover it. 
00:26:44.179 [2024-07-15 14:05:38.877308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.179 [2024-07-15 14:05:38.877340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.179 qpair failed and we were unable to recover it. 00:26:44.179 [2024-07-15 14:05:38.877452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.179 [2024-07-15 14:05:38.877483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.179 qpair failed and we were unable to recover it. 00:26:44.179 [2024-07-15 14:05:38.877588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.179 [2024-07-15 14:05:38.877619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.179 qpair failed and we were unable to recover it. 00:26:44.179 [2024-07-15 14:05:38.877759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.179 [2024-07-15 14:05:38.877791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.179 qpair failed and we were unable to recover it. 00:26:44.179 [2024-07-15 14:05:38.877906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.179 [2024-07-15 14:05:38.877939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.179 qpair failed and we were unable to recover it. 00:26:44.179 [2024-07-15 14:05:38.878077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.179 [2024-07-15 14:05:38.878109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.179 qpair failed and we were unable to recover it. 00:26:44.179 [2024-07-15 14:05:38.878301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.179 [2024-07-15 14:05:38.878334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.179 qpair failed and we were unable to recover it. 00:26:44.179 [2024-07-15 14:05:38.878515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.179 [2024-07-15 14:05:38.878547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.179 qpair failed and we were unable to recover it. 00:26:44.179 [2024-07-15 14:05:38.878702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.179 [2024-07-15 14:05:38.878735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.179 qpair failed and we were unable to recover it. 00:26:44.179 [2024-07-15 14:05:38.878857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.179 [2024-07-15 14:05:38.878890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.179 qpair failed and we were unable to recover it. 
00:26:44.179 [2024-07-15 14:05:38.879065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.179 [2024-07-15 14:05:38.879108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.179 qpair failed and we were unable to recover it. 00:26:44.179 [2024-07-15 14:05:38.879265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.179 [2024-07-15 14:05:38.879293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.179 qpair failed and we were unable to recover it. 00:26:44.179 [2024-07-15 14:05:38.879436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.179 [2024-07-15 14:05:38.879468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.179 qpair failed and we were unable to recover it. 00:26:44.179 [2024-07-15 14:05:38.879647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.179 [2024-07-15 14:05:38.879679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.179 qpair failed and we were unable to recover it. 00:26:44.179 [2024-07-15 14:05:38.879797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.179 [2024-07-15 14:05:38.879825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.179 qpair failed and we were unable to recover it. 00:26:44.179 [2024-07-15 14:05:38.879936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.179 [2024-07-15 14:05:38.879963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.179 qpair failed and we were unable to recover it. 00:26:44.179 [2024-07-15 14:05:38.880082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.179 [2024-07-15 14:05:38.880130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.179 qpair failed and we were unable to recover it. 00:26:44.179 [2024-07-15 14:05:38.880305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.179 [2024-07-15 14:05:38.880338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.179 qpair failed and we were unable to recover it. 00:26:44.179 [2024-07-15 14:05:38.880454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.179 [2024-07-15 14:05:38.880486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.179 qpair failed and we were unable to recover it. 00:26:44.179 [2024-07-15 14:05:38.880625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.179 [2024-07-15 14:05:38.880657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.179 qpair failed and we were unable to recover it. 
00:26:44.179 [2024-07-15 14:05:38.880807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.179 [2024-07-15 14:05:38.880836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.179 qpair failed and we were unable to recover it. 00:26:44.179 [2024-07-15 14:05:38.880968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.179 [2024-07-15 14:05:38.880995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.179 qpair failed and we were unable to recover it. 00:26:44.179 [2024-07-15 14:05:38.881235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.179 [2024-07-15 14:05:38.881275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.179 qpair failed and we were unable to recover it. 00:26:44.179 [2024-07-15 14:05:38.881455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.179 [2024-07-15 14:05:38.881487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.179 qpair failed and we were unable to recover it. 00:26:44.179 [2024-07-15 14:05:38.881603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.179 [2024-07-15 14:05:38.881634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.179 qpair failed and we were unable to recover it. 00:26:44.179 [2024-07-15 14:05:38.881798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.179 [2024-07-15 14:05:38.881827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.179 qpair failed and we were unable to recover it. 00:26:44.179 [2024-07-15 14:05:38.881937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.179 [2024-07-15 14:05:38.881965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.179 qpair failed and we were unable to recover it. 00:26:44.179 [2024-07-15 14:05:38.882097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.179 [2024-07-15 14:05:38.882130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.179 qpair failed and we were unable to recover it. 00:26:44.179 [2024-07-15 14:05:38.882307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.180 [2024-07-15 14:05:38.882341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.180 qpair failed and we were unable to recover it. 00:26:44.180 [2024-07-15 14:05:38.882477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.180 [2024-07-15 14:05:38.882509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.180 qpair failed and we were unable to recover it. 
00:26:44.180 [2024-07-15 14:05:38.882626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.180 [2024-07-15 14:05:38.882657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.180 qpair failed and we were unable to recover it. 00:26:44.180 [2024-07-15 14:05:38.882774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.180 [2024-07-15 14:05:38.882819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.180 qpair failed and we were unable to recover it. 00:26:44.180 [2024-07-15 14:05:38.882944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.180 [2024-07-15 14:05:38.882972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.180 qpair failed and we were unable to recover it. 00:26:44.180 [2024-07-15 14:05:38.883114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.180 [2024-07-15 14:05:38.883147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.180 qpair failed and we were unable to recover it. 00:26:44.180 [2024-07-15 14:05:38.883389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.180 [2024-07-15 14:05:38.883421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.180 qpair failed and we were unable to recover it. 00:26:44.180 [2024-07-15 14:05:38.883539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.180 [2024-07-15 14:05:38.883571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.180 qpair failed and we were unable to recover it. 00:26:44.180 [2024-07-15 14:05:38.883742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.180 [2024-07-15 14:05:38.883776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.180 qpair failed and we were unable to recover it. 00:26:44.180 [2024-07-15 14:05:38.883898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.180 [2024-07-15 14:05:38.883930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.180 qpair failed and we were unable to recover it. 00:26:44.180 [2024-07-15 14:05:38.884042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.180 [2024-07-15 14:05:38.884079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.180 qpair failed and we were unable to recover it. 00:26:44.180 [2024-07-15 14:05:38.884288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.180 [2024-07-15 14:05:38.884320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.180 qpair failed and we were unable to recover it. 
00:26:44.180 [2024-07-15 14:05:38.884497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.180 [2024-07-15 14:05:38.884529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.180 qpair failed and we were unable to recover it. 00:26:44.180 [2024-07-15 14:05:38.884707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.180 [2024-07-15 14:05:38.884756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.180 qpair failed and we were unable to recover it. 00:26:44.180 [2024-07-15 14:05:38.884908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.180 [2024-07-15 14:05:38.884957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.180 qpair failed and we were unable to recover it. 00:26:44.180 [2024-07-15 14:05:38.885087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.180 [2024-07-15 14:05:38.885124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.180 qpair failed and we were unable to recover it. 00:26:44.180 [2024-07-15 14:05:38.885342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.180 [2024-07-15 14:05:38.885375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.180 qpair failed and we were unable to recover it. 00:26:44.180 [2024-07-15 14:05:38.885516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.180 [2024-07-15 14:05:38.885547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.180 qpair failed and we were unable to recover it. 00:26:44.180 [2024-07-15 14:05:38.885711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.180 [2024-07-15 14:05:38.885751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.180 qpair failed and we were unable to recover it. 00:26:44.180 [2024-07-15 14:05:38.885863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.180 [2024-07-15 14:05:38.885896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.180 qpair failed and we were unable to recover it. 00:26:44.180 [2024-07-15 14:05:38.886025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.180 [2024-07-15 14:05:38.886073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.180 qpair failed and we were unable to recover it. 00:26:44.180 [2024-07-15 14:05:38.886253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.180 [2024-07-15 14:05:38.886284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.180 qpair failed and we were unable to recover it. 
00:26:44.180 [2024-07-15 14:05:38.886419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.180 [2024-07-15 14:05:38.886451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.180 qpair failed and we were unable to recover it. 00:26:44.180 [2024-07-15 14:05:38.886639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.180 [2024-07-15 14:05:38.886672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.180 qpair failed and we were unable to recover it. 00:26:44.180 [2024-07-15 14:05:38.886833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.180 [2024-07-15 14:05:38.886883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.180 qpair failed and we were unable to recover it. 00:26:44.180 [2024-07-15 14:05:38.887006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.180 [2024-07-15 14:05:38.887038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.180 qpair failed and we were unable to recover it. 00:26:44.180 [2024-07-15 14:05:38.887221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.180 [2024-07-15 14:05:38.887254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.180 qpair failed and we were unable to recover it. 00:26:44.180 [2024-07-15 14:05:38.887396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.180 [2024-07-15 14:05:38.887428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.180 qpair failed and we were unable to recover it. 00:26:44.180 [2024-07-15 14:05:38.887654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.180 [2024-07-15 14:05:38.887687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.180 qpair failed and we were unable to recover it. 00:26:44.180 [2024-07-15 14:05:38.887847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.180 [2024-07-15 14:05:38.887897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.180 qpair failed and we were unable to recover it. 00:26:44.180 [2024-07-15 14:05:38.888075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.180 [2024-07-15 14:05:38.888107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.180 qpair failed and we were unable to recover it. 00:26:44.180 [2024-07-15 14:05:38.888285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.180 [2024-07-15 14:05:38.888333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.180 qpair failed and we were unable to recover it. 
00:26:44.180 [2024-07-15 14:05:38.888474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.180 [2024-07-15 14:05:38.888506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.180 qpair failed and we were unable to recover it. 00:26:44.180 [2024-07-15 14:05:38.888689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.180 [2024-07-15 14:05:38.888722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.180 qpair failed and we were unable to recover it. 00:26:44.180 [2024-07-15 14:05:38.888854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.180 [2024-07-15 14:05:38.888903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.180 qpair failed and we were unable to recover it. 00:26:44.180 [2024-07-15 14:05:38.889022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 14:05:38.889055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 00:26:44.181 [2024-07-15 14:05:38.889176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 14:05:38.889210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 00:26:44.181 [2024-07-15 14:05:38.889387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 14:05:38.889436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 00:26:44.181 [2024-07-15 14:05:38.889574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 14:05:38.889605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 00:26:44.181 [2024-07-15 14:05:38.889761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 14:05:38.889794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 00:26:44.181 [2024-07-15 14:05:38.889948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 14:05:38.889997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 00:26:44.181 [2024-07-15 14:05:38.890128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 14:05:38.890175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 
00:26:44.181 [2024-07-15 14:05:38.890326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 14:05:38.890356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 00:26:44.181 [2024-07-15 14:05:38.890473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 14:05:38.890506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 00:26:44.181 [2024-07-15 14:05:38.890620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 14:05:38.890652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 00:26:44.181 [2024-07-15 14:05:38.890807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 14:05:38.890841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 00:26:44.181 [2024-07-15 14:05:38.890974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 14:05:38.891006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 00:26:44.181 [2024-07-15 14:05:38.891147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 14:05:38.891179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 00:26:44.181 [2024-07-15 14:05:38.891365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 14:05:38.891397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 00:26:44.181 [2024-07-15 14:05:38.891564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 14:05:38.891596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 00:26:44.181 [2024-07-15 14:05:38.891727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 14:05:38.891770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 00:26:44.181 [2024-07-15 14:05:38.891890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 14:05:38.891922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 
00:26:44.181 [2024-07-15 14:05:38.892070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 14:05:38.892103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 00:26:44.181 [2024-07-15 14:05:38.892267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 14:05:38.892299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 00:26:44.181 [2024-07-15 14:05:38.892442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 14:05:38.892474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 00:26:44.181 [2024-07-15 14:05:38.892627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 14:05:38.892659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 00:26:44.181 [2024-07-15 14:05:38.892840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 14:05:38.892870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 00:26:44.181 [2024-07-15 14:05:38.892981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 14:05:38.893010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 00:26:44.181 [2024-07-15 14:05:38.893135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 14:05:38.893164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 00:26:44.181 [2024-07-15 14:05:38.893286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 14:05:38.893314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 00:26:44.181 [2024-07-15 14:05:38.893453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 14:05:38.893482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 00:26:44.181 [2024-07-15 14:05:38.893610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 14:05:38.893639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 
00:26:44.181 [2024-07-15 14:05:38.893803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 14:05:38.893833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 00:26:44.181 [2024-07-15 14:05:38.893960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 14:05:38.893988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 00:26:44.181 [2024-07-15 14:05:38.894126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 14:05:38.894155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 00:26:44.181 [2024-07-15 14:05:38.894283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 14:05:38.894312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 00:26:44.181 [2024-07-15 14:05:38.894525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 14:05:38.894558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 00:26:44.181 [2024-07-15 14:05:38.894699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 14:05:38.894732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 00:26:44.181 [2024-07-15 14:05:38.894893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 14:05:38.894922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 00:26:44.181 [2024-07-15 14:05:38.895055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 14:05:38.895088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 00:26:44.181 [2024-07-15 14:05:38.895241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 14:05:38.895274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 00:26:44.181 [2024-07-15 14:05:38.895439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.181 [2024-07-15 14:05:38.895471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.181 qpair failed and we were unable to recover it. 
00:26:44.187 [2024-07-15 14:05:38.935710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-07-15 14:05:38.935761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-07-15 14:05:38.935985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-07-15 14:05:38.936044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-07-15 14:05:38.936235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-07-15 14:05:38.936290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-07-15 14:05:38.936429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-07-15 14:05:38.936481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-07-15 14:05:38.936656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-07-15 14:05:38.936700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-07-15 14:05:38.936870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-07-15 14:05:38.936904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-07-15 14:05:38.937086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-07-15 14:05:38.937157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-07-15 14:05:38.937304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-07-15 14:05:38.937358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-07-15 14:05:38.937528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-07-15 14:05:38.937561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-07-15 14:05:38.937759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-07-15 14:05:38.937793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 
00:26:44.187 [2024-07-15 14:05:38.937955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-07-15 14:05:38.938021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-07-15 14:05:38.938203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-07-15 14:05:38.938264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-07-15 14:05:38.938421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-07-15 14:05:38.938484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-07-15 14:05:38.938633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-07-15 14:05:38.938665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-07-15 14:05:38.938849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-07-15 14:05:38.938915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-07-15 14:05:38.939066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-07-15 14:05:38.939127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-07-15 14:05:38.939302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-07-15 14:05:38.939354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-07-15 14:05:38.939550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-07-15 14:05:38.939586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-07-15 14:05:38.939752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-07-15 14:05:38.939790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-07-15 14:05:38.939955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-07-15 14:05:38.940008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 
00:26:44.187 [2024-07-15 14:05:38.940178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-07-15 14:05:38.940232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-07-15 14:05:38.940449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-07-15 14:05:38.940498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-07-15 14:05:38.940629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-07-15 14:05:38.940661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-07-15 14:05:38.940836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-07-15 14:05:38.940892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-07-15 14:05:38.941031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-07-15 14:05:38.941092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-07-15 14:05:38.941246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-07-15 14:05:38.941299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-07-15 14:05:38.941481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-07-15 14:05:38.941513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-07-15 14:05:38.941718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-07-15 14:05:38.941770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-07-15 14:05:38.941933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-07-15 14:05:38.941998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-07-15 14:05:38.942157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-07-15 14:05:38.942215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 
00:26:44.187 [2024-07-15 14:05:38.942448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-07-15 14:05:38.942481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-07-15 14:05:38.942654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-07-15 14:05:38.942698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-07-15 14:05:38.942901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-07-15 14:05:38.942935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-07-15 14:05:38.943144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-07-15 14:05:38.943197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-07-15 14:05:38.943339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-07-15 14:05:38.943391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-07-15 14:05:38.943613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-07-15 14:05:38.943654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-07-15 14:05:38.943833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-07-15 14:05:38.943897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.187 qpair failed and we were unable to recover it. 00:26:44.187 [2024-07-15 14:05:38.944076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.187 [2024-07-15 14:05:38.944129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 00:26:44.188 [2024-07-15 14:05:38.944315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-07-15 14:05:38.944370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 00:26:44.188 [2024-07-15 14:05:38.944513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-07-15 14:05:38.944545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 
00:26:44.188 [2024-07-15 14:05:38.944684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-07-15 14:05:38.944728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 00:26:44.188 [2024-07-15 14:05:38.945002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-07-15 14:05:38.945035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 00:26:44.188 [2024-07-15 14:05:38.945212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-07-15 14:05:38.945266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 00:26:44.188 [2024-07-15 14:05:38.945404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-07-15 14:05:38.945458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 00:26:44.188 [2024-07-15 14:05:38.945608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-07-15 14:05:38.945641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 00:26:44.188 [2024-07-15 14:05:38.945946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-07-15 14:05:38.945980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 00:26:44.188 [2024-07-15 14:05:38.946155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-07-15 14:05:38.946216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 00:26:44.188 [2024-07-15 14:05:38.946422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-07-15 14:05:38.946479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 00:26:44.188 [2024-07-15 14:05:38.946619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-07-15 14:05:38.946651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 00:26:44.188 [2024-07-15 14:05:38.946801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-07-15 14:05:38.946864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 
00:26:44.188 [2024-07-15 14:05:38.947037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-07-15 14:05:38.947100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 00:26:44.188 [2024-07-15 14:05:38.947240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-07-15 14:05:38.947297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 00:26:44.188 [2024-07-15 14:05:38.947495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-07-15 14:05:38.947527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 00:26:44.188 [2024-07-15 14:05:38.947683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-07-15 14:05:38.947716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 00:26:44.188 [2024-07-15 14:05:38.947909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-07-15 14:05:38.947972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 00:26:44.188 [2024-07-15 14:05:38.948097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-07-15 14:05:38.948162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 00:26:44.188 [2024-07-15 14:05:38.948329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-07-15 14:05:38.948379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 00:26:44.188 [2024-07-15 14:05:38.948575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-07-15 14:05:38.948619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 00:26:44.188 [2024-07-15 14:05:38.948778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-07-15 14:05:38.948816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 00:26:44.188 [2024-07-15 14:05:38.948960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-07-15 14:05:38.949014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 
00:26:44.188 [2024-07-15 14:05:38.949192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-07-15 14:05:38.949251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 00:26:44.188 [2024-07-15 14:05:38.949401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-07-15 14:05:38.949434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 00:26:44.188 [2024-07-15 14:05:38.949609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-07-15 14:05:38.949642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 00:26:44.188 [2024-07-15 14:05:38.949915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-07-15 14:05:38.949966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 00:26:44.188 [2024-07-15 14:05:38.950153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-07-15 14:05:38.950186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 00:26:44.188 [2024-07-15 14:05:38.950351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-07-15 14:05:38.950408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 00:26:44.188 [2024-07-15 14:05:38.950532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-07-15 14:05:38.950564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 00:26:44.188 [2024-07-15 14:05:38.950829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-07-15 14:05:38.950881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 00:26:44.188 [2024-07-15 14:05:38.951074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-07-15 14:05:38.951107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 00:26:44.188 [2024-07-15 14:05:38.951303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-07-15 14:05:38.951344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 
00:26:44.188 [2024-07-15 14:05:38.951490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-07-15 14:05:38.951532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 00:26:44.188 [2024-07-15 14:05:38.951757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-07-15 14:05:38.951798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 00:26:44.188 [2024-07-15 14:05:38.951969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.188 [2024-07-15 14:05:38.952031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.188 qpair failed and we were unable to recover it. 00:26:44.189 [2024-07-15 14:05:38.952187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-07-15 14:05:38.952254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 00:26:44.189 [2024-07-15 14:05:38.952413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-07-15 14:05:38.952472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 00:26:44.189 [2024-07-15 14:05:38.952628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-07-15 14:05:38.952660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 00:26:44.189 [2024-07-15 14:05:38.952817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-07-15 14:05:38.952881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 00:26:44.189 [2024-07-15 14:05:38.953082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-07-15 14:05:38.953130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 00:26:44.189 [2024-07-15 14:05:38.953320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-07-15 14:05:38.953378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 00:26:44.189 [2024-07-15 14:05:38.953529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-07-15 14:05:38.953571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 
00:26:44.189 [2024-07-15 14:05:38.953791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-07-15 14:05:38.953835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 00:26:44.189 [2024-07-15 14:05:38.953993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-07-15 14:05:38.954045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 00:26:44.189 [2024-07-15 14:05:38.954288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-07-15 14:05:38.954320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 00:26:44.189 [2024-07-15 14:05:38.954464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-07-15 14:05:38.954520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 00:26:44.189 [2024-07-15 14:05:38.954673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-07-15 14:05:38.954706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 00:26:44.189 [2024-07-15 14:05:38.954903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-07-15 14:05:38.954967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 00:26:44.189 [2024-07-15 14:05:38.955157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-07-15 14:05:38.955210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 00:26:44.189 [2024-07-15 14:05:38.955384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-07-15 14:05:38.955431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 00:26:44.189 [2024-07-15 14:05:38.955624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-07-15 14:05:38.955667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 00:26:44.189 [2024-07-15 14:05:38.955885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-07-15 14:05:38.955944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 
00:26:44.189 [2024-07-15 14:05:38.956084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-07-15 14:05:38.956137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 00:26:44.189 [2024-07-15 14:05:38.956394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-07-15 14:05:38.956426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 00:26:44.189 [2024-07-15 14:05:38.956595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-07-15 14:05:38.956628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 00:26:44.189 [2024-07-15 14:05:38.956795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-07-15 14:05:38.956873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 00:26:44.189 [2024-07-15 14:05:38.957037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-07-15 14:05:38.957097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 00:26:44.189 [2024-07-15 14:05:38.957314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-07-15 14:05:38.957363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 00:26:44.189 [2024-07-15 14:05:38.957512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-07-15 14:05:38.957556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 00:26:44.189 [2024-07-15 14:05:38.957723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-07-15 14:05:38.957771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 00:26:44.189 [2024-07-15 14:05:38.957928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-07-15 14:05:38.957987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 00:26:44.189 [2024-07-15 14:05:38.958146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-07-15 14:05:38.958200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 
00:26:44.189 [2024-07-15 14:05:38.958433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-07-15 14:05:38.958465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 00:26:44.189 [2024-07-15 14:05:38.958628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-07-15 14:05:38.958661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 00:26:44.189 [2024-07-15 14:05:38.958849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-07-15 14:05:38.958882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 00:26:44.189 [2024-07-15 14:05:38.959012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-07-15 14:05:38.959064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 00:26:44.189 [2024-07-15 14:05:38.959219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-07-15 14:05:38.959283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 00:26:44.189 [2024-07-15 14:05:38.959422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-07-15 14:05:38.959454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 00:26:44.189 [2024-07-15 14:05:38.959668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-07-15 14:05:38.959701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 00:26:44.189 [2024-07-15 14:05:38.959876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-07-15 14:05:38.959928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 00:26:44.189 [2024-07-15 14:05:38.960100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-07-15 14:05:38.960154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 00:26:44.189 [2024-07-15 14:05:38.960316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-07-15 14:05:38.960370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 
00:26:44.189 [2024-07-15 14:05:38.960543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-07-15 14:05:38.960585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 00:26:44.189 [2024-07-15 14:05:38.960713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-07-15 14:05:38.960753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 00:26:44.189 [2024-07-15 14:05:38.960929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.189 [2024-07-15 14:05:38.960962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.189 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 14:05:38.961170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 14:05:38.961203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 14:05:38.961314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 14:05:38.961346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 14:05:38.961540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 14:05:38.961572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 14:05:38.961687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 14:05:38.961718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 14:05:38.961980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 14:05:38.962043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 14:05:38.962224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 14:05:38.962276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 14:05:38.962472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 14:05:38.962543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 
00:26:44.190 [2024-07-15 14:05:38.962686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 14:05:38.962729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 14:05:38.962872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 14:05:38.962929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 14:05:38.963146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 14:05:38.963198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 14:05:38.963432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 14:05:38.963465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 14:05:38.963613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 14:05:38.963656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 14:05:38.963856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 14:05:38.963914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 14:05:38.964080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 14:05:38.964136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 14:05:38.964292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 14:05:38.964346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 14:05:38.964508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 14:05:38.964540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 14:05:38.964655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 14:05:38.964687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 
00:26:44.190 [2024-07-15 14:05:38.964854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 14:05:38.964909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 14:05:38.965079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 14:05:38.965136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 14:05:38.965297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 14:05:38.965358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 14:05:38.965540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 14:05:38.965571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 14:05:38.965687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 14:05:38.965719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 14:05:38.965884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 14:05:38.965916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 14:05:38.966085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 14:05:38.966117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 14:05:38.966300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 14:05:38.966332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 14:05:38.966504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 14:05:38.966546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 14:05:38.966765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 14:05:38.966807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 
00:26:44.190 [2024-07-15 14:05:38.966956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 14:05:38.967021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 14:05:38.967176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 14:05:38.967237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 14:05:38.967471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 14:05:38.967503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 14:05:38.967702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 14:05:38.967754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 14:05:38.967915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 14:05:38.967976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 14:05:38.968142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 14:05:38.968194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 14:05:38.968445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 14:05:38.968498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 14:05:38.968771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 14:05:38.968805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 14:05:38.969001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 14:05:38.969059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 14:05:38.969251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 14:05:38.969301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 
00:26:44.190 [2024-07-15 14:05:38.969489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 14:05:38.969520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 14:05:38.969625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 14:05:38.969658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 14:05:38.969837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 14:05:38.969899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 14:05:38.970092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 14:05:38.970148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.190 [2024-07-15 14:05:38.970533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.190 [2024-07-15 14:05:38.970599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.190 qpair failed and we were unable to recover it. 00:26:44.191 [2024-07-15 14:05:38.970752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-07-15 14:05:38.970785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-07-15 14:05:38.970977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-07-15 14:05:38.971042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-07-15 14:05:38.971191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-07-15 14:05:38.971246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-07-15 14:05:38.971411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-07-15 14:05:38.971474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-07-15 14:05:38.971669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-07-15 14:05:38.971711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 
00:26:44.191 [2024-07-15 14:05:38.971832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-07-15 14:05:38.971864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-07-15 14:05:38.972083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-07-15 14:05:38.972139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-07-15 14:05:38.972332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-07-15 14:05:38.972388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-07-15 14:05:38.972540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-07-15 14:05:38.972573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-07-15 14:05:38.972760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-07-15 14:05:38.972800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-07-15 14:05:38.973050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-07-15 14:05:38.973103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-07-15 14:05:38.973263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-07-15 14:05:38.973322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-07-15 14:05:38.973477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-07-15 14:05:38.973509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.191 [2024-07-15 14:05:38.973646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.191 [2024-07-15 14:05:38.973678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.191 qpair failed and we were unable to recover it. 00:26:44.498 [2024-07-15 14:05:38.973839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-07-15 14:05:38.973900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.498 qpair failed and we were unable to recover it. 
00:26:44.498 [2024-07-15 14:05:38.974078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-07-15 14:05:38.974132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.498 qpair failed and we were unable to recover it. 00:26:44.498 [2024-07-15 14:05:38.974279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-07-15 14:05:38.974334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.498 qpair failed and we were unable to recover it. 00:26:44.498 [2024-07-15 14:05:38.974453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-07-15 14:05:38.974485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.498 qpair failed and we were unable to recover it. 00:26:44.498 [2024-07-15 14:05:38.974678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-07-15 14:05:38.974711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.498 qpair failed and we were unable to recover it. 00:26:44.498 [2024-07-15 14:05:38.974908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-07-15 14:05:38.974941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.498 qpair failed and we were unable to recover it. 00:26:44.498 [2024-07-15 14:05:38.975107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-07-15 14:05:38.975140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.498 qpair failed and we were unable to recover it. 00:26:44.498 [2024-07-15 14:05:38.975352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-07-15 14:05:38.975404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.498 qpair failed and we were unable to recover it. 00:26:44.499 [2024-07-15 14:05:38.975628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-07-15 14:05:38.975658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 00:26:44.499 [2024-07-15 14:05:38.975775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-07-15 14:05:38.975812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 00:26:44.499 [2024-07-15 14:05:38.976010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-07-15 14:05:38.976073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 
00:26:44.499 [2024-07-15 14:05:38.976227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-07-15 14:05:38.976279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 00:26:44.499 [2024-07-15 14:05:38.976401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-07-15 14:05:38.976433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 00:26:44.499 [2024-07-15 14:05:38.976552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-07-15 14:05:38.976585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 00:26:44.499 [2024-07-15 14:05:38.976766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-07-15 14:05:38.976801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 00:26:44.499 [2024-07-15 14:05:38.976924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-07-15 14:05:38.976960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 00:26:44.499 [2024-07-15 14:05:38.977189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-07-15 14:05:38.977223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 00:26:44.499 [2024-07-15 14:05:38.977337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-07-15 14:05:38.977371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 00:26:44.499 [2024-07-15 14:05:38.977521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-07-15 14:05:38.977553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 00:26:44.499 [2024-07-15 14:05:38.977701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-07-15 14:05:38.977733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 00:26:44.499 [2024-07-15 14:05:38.977856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-07-15 14:05:38.977893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 
00:26:44.499 [2024-07-15 14:05:38.978014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-07-15 14:05:38.978047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 00:26:44.499 [2024-07-15 14:05:38.978192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-07-15 14:05:38.978223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 00:26:44.499 [2024-07-15 14:05:38.978410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-07-15 14:05:38.978456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 00:26:44.499 [2024-07-15 14:05:38.978633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-07-15 14:05:38.978667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 00:26:44.499 [2024-07-15 14:05:38.978872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-07-15 14:05:38.978921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 00:26:44.499 [2024-07-15 14:05:38.979070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-07-15 14:05:38.979102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 00:26:44.499 [2024-07-15 14:05:38.979254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-07-15 14:05:38.979285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 00:26:44.499 [2024-07-15 14:05:38.979483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-07-15 14:05:38.979516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 00:26:44.499 [2024-07-15 14:05:38.979665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-07-15 14:05:38.979699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 00:26:44.499 [2024-07-15 14:05:38.979869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-07-15 14:05:38.979905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 
00:26:44.499 [2024-07-15 14:05:38.980048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-07-15 14:05:38.980080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 00:26:44.499 [2024-07-15 14:05:38.980296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-07-15 14:05:38.980338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 00:26:44.499 [2024-07-15 14:05:38.980524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-07-15 14:05:38.980558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 00:26:44.499 [2024-07-15 14:05:38.980712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-07-15 14:05:38.980753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 00:26:44.499 [2024-07-15 14:05:38.980978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-07-15 14:05:38.981040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 00:26:44.499 [2024-07-15 14:05:38.981225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-07-15 14:05:38.981296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 00:26:44.499 [2024-07-15 14:05:38.981449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-07-15 14:05:38.981511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 00:26:44.499 [2024-07-15 14:05:38.981659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-07-15 14:05:38.981692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 00:26:44.499 [2024-07-15 14:05:38.981916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-07-15 14:05:38.981972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 00:26:44.499 [2024-07-15 14:05:38.982132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-07-15 14:05:38.982195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 
00:26:44.499 [2024-07-15 14:05:38.982473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-07-15 14:05:38.982529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 00:26:44.499 [2024-07-15 14:05:38.982767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-07-15 14:05:38.982802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 00:26:44.499 [2024-07-15 14:05:38.982980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-07-15 14:05:38.983053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 00:26:44.499 [2024-07-15 14:05:38.983199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-07-15 14:05:38.983258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 00:26:44.499 [2024-07-15 14:05:38.983425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-07-15 14:05:38.983492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 00:26:44.499 [2024-07-15 14:05:38.983671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-07-15 14:05:38.983706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 00:26:44.499 [2024-07-15 14:05:38.983878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-07-15 14:05:38.983948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 00:26:44.499 [2024-07-15 14:05:38.984133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 14:05:38.984190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 14:05:38.984380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 14:05:38.984417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 14:05:38.984579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 14:05:38.984626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 
00:26:44.500 [2024-07-15 14:05:38.984859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 14:05:38.984903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 14:05:38.985065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 14:05:38.985098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 14:05:38.985272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 14:05:38.985307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 14:05:38.985451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 14:05:38.985484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 14:05:38.985628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 14:05:38.985666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 14:05:38.985858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 14:05:38.985912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 14:05:38.986111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 14:05:38.986162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 14:05:38.986311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 14:05:38.986373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 14:05:38.986588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 14:05:38.986622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 14:05:38.986796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 14:05:38.986879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 
00:26:44.500 [2024-07-15 14:05:38.987083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 14:05:38.987117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 14:05:38.987309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 14:05:38.987368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 14:05:38.987521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 14:05:38.987562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 14:05:38.987693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 14:05:38.987728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 14:05:38.987918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 14:05:38.987981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 14:05:38.988139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 14:05:38.988197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 14:05:38.988422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 14:05:38.988458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 14:05:38.988607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 14:05:38.988643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 14:05:38.988780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 14:05:38.988825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 14:05:38.989041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 14:05:38.989094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 
00:26:44.500 [2024-07-15 14:05:38.989298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 14:05:38.989348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 14:05:38.989490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 14:05:38.989528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 14:05:38.989696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 14:05:38.989732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 14:05:38.989915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 14:05:38.989980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 14:05:38.990192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 14:05:38.990248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 14:05:38.990411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 14:05:38.990482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 14:05:38.990618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 14:05:38.990651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 14:05:38.990802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 14:05:38.990871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 14:05:38.991018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 14:05:38.991074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 14:05:38.991211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 14:05:38.991275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 
00:26:44.500 [2024-07-15 14:05:38.991458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 14:05:38.991490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 14:05:38.991616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 14:05:38.991650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 14:05:38.991837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 14:05:38.991872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 14:05:38.992023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 14:05:38.992057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 14:05:38.992235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 14:05:38.992301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 14:05:38.992430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 14:05:38.992463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-07-15 14:05:38.992616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-07-15 14:05:38.992653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 14:05:38.992818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 14:05:38.992882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 14:05:38.993034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 14:05:38.993091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 14:05:38.993292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 14:05:38.993348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 
00:26:44.501 [2024-07-15 14:05:38.993469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 14:05:38.993503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 14:05:38.993663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 14:05:38.993697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 14:05:38.993924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 14:05:38.993960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 14:05:38.994144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 14:05:38.994213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 14:05:38.994334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 14:05:38.994370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 14:05:38.994573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 14:05:38.994620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 14:05:38.994773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 14:05:38.994822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 14:05:38.995061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 14:05:38.995117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 14:05:38.995344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 14:05:38.995400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 14:05:38.995661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 14:05:38.995698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 
00:26:44.501 [2024-07-15 14:05:38.995878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 14:05:38.995914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 14:05:38.996081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 14:05:38.996142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 14:05:38.996317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 14:05:38.996376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 14:05:38.996510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 14:05:38.996547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 14:05:38.996693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 14:05:38.996726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 14:05:38.996917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 14:05:38.996974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 14:05:38.997113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 14:05:38.997146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 14:05:38.997266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 14:05:38.997298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 14:05:38.997429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 14:05:38.997460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 14:05:38.997593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 14:05:38.997624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 
00:26:44.501 [2024-07-15 14:05:38.997823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 14:05:38.997857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 14:05:38.998031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 14:05:38.998064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 14:05:38.998214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 14:05:38.998246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 14:05:38.998383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 14:05:38.998414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 14:05:38.998547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 14:05:38.998579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 14:05:38.998752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 14:05:38.998793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 14:05:38.998960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 14:05:38.998992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 14:05:38.999157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 14:05:38.999189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 14:05:38.999358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 14:05:38.999391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 14:05:38.999521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 14:05:38.999552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 
00:26:44.501 [2024-07-15 14:05:38.999672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 14:05:38.999704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 14:05:38.999845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 14:05:38.999879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 14:05:39.000065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 14:05:39.000098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 14:05:39.000267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 14:05:39.000299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 14:05:39.000413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 14:05:39.000445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-07-15 14:05:39.000659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-07-15 14:05:39.000702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.502 [2024-07-15 14:05:39.000966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.502 [2024-07-15 14:05:39.001000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.502 qpair failed and we were unable to recover it. 00:26:44.502 [2024-07-15 14:05:39.001162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.502 [2024-07-15 14:05:39.001218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.502 qpair failed and we were unable to recover it. 00:26:44.502 [2024-07-15 14:05:39.001341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.502 [2024-07-15 14:05:39.001402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.502 qpair failed and we were unable to recover it. 00:26:44.502 [2024-07-15 14:05:39.001618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.502 [2024-07-15 14:05:39.001661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.502 qpair failed and we were unable to recover it. 
00:26:44.502 [2024-07-15 14:05:39.001847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.502 [2024-07-15 14:05:39.001903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420
00:26:44.502 qpair failed and we were unable to recover it.
[... the same three messages (connect() failed, errno = 111 / sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeat for every retry between 14:05:39.002058 and 14:05:39.022240; only the timestamps change ...]
00:26:44.504 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3861791 Killed "${NVMF_APP[@]}" "$@"
[... the connect() failed, errno = 111 / sock connection error of tqpair=0x7f7dbc000b90 / qpair failed messages continue throughout this interval (14:05:39.022377 through 14:05:39.030238), interleaved with the shell trace below ...]
14:05:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
14:05:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
14:05:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
14:05:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
14:05:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
14:05:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3862341
14:05:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
14:05:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3862341
14:05:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3862341 ']'
14:05:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
14:05:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
14:05:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:44.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
14:05:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
14:05:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
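The trace above shows the test restarting the target: the previous nvmf_tgt has been killed, a new one is launched inside the cvl_0_0_ns_spdk namespace, and waitforlisten then waits until the new process (pid 3862341) is up and its RPC socket /var/tmp/spdk.sock is accepting commands. Until that happens, the host side keeps retrying the TCP connection to 10.0.0.2:4420 and getting errno 111 (ECONNREFUSED), which is exactly the repeated posix.c/nvme_tcp.c error pairs filling this log. A minimal sketch of such a wait loop, purely hypothetical and not the actual SPDK waitforlisten implementation, could look like this:

# Hypothetical sketch only: poll until $pid is alive and its RPC UNIX socket exists.
wait_for_rpc_socket() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100}
    while ((max_retries-- > 0)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target process died
        [[ -S "$rpc_addr" ]] && return 0         # socket present: target is listening
        sleep 0.1
    done
    return 1                                     # timed out waiting for the socket
}

Until a loop like this succeeds, every connect() attempt from the host is refused, so the qpair errors below simply keep repeating.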
00:26:44.505 [2024-07-15 14:05:39.030371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.505 [2024-07-15 14:05:39.030394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420
00:26:44.505 qpair failed and we were unable to recover it.
[... the same three messages repeat for every retry between 14:05:39.030521 and 14:05:39.041659; only the timestamps change ...]
00:26:44.507 [2024-07-15 14:05:39.041851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-07-15 14:05:39.041895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-07-15 14:05:39.042007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-07-15 14:05:39.042035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-07-15 14:05:39.042231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-07-15 14:05:39.042272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-07-15 14:05:39.042390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-07-15 14:05:39.042415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-07-15 14:05:39.042673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-07-15 14:05:39.042699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-07-15 14:05:39.042861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-07-15 14:05:39.042906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-07-15 14:05:39.043080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-07-15 14:05:39.043124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-07-15 14:05:39.043281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-07-15 14:05:39.043325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-07-15 14:05:39.043483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-07-15 14:05:39.043509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-07-15 14:05:39.043675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-07-15 14:05:39.043702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 
00:26:44.507 [2024-07-15 14:05:39.043858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-07-15 14:05:39.043885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-07-15 14:05:39.044034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-07-15 14:05:39.044060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-07-15 14:05:39.044187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-07-15 14:05:39.044215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-07-15 14:05:39.044367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-07-15 14:05:39.044394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-07-15 14:05:39.044579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-07-15 14:05:39.044605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-07-15 14:05:39.044757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-07-15 14:05:39.044784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-07-15 14:05:39.044907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-07-15 14:05:39.044950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-07-15 14:05:39.045117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-07-15 14:05:39.045175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 14:05:39.045344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 14:05:39.045386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 14:05:39.045584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 14:05:39.045609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 
00:26:44.508 [2024-07-15 14:05:39.045788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 14:05:39.045815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 14:05:39.045940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 14:05:39.045982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 14:05:39.046173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 14:05:39.046216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 14:05:39.046389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 14:05:39.046415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 14:05:39.046513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 14:05:39.046538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 14:05:39.046751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 14:05:39.046777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 14:05:39.046912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 14:05:39.046955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 14:05:39.047145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 14:05:39.047173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 14:05:39.047354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 14:05:39.047396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 14:05:39.047536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 14:05:39.047562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 
00:26:44.508 [2024-07-15 14:05:39.047747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 14:05:39.047774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 14:05:39.047943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 14:05:39.047985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 14:05:39.048099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 14:05:39.048139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 14:05:39.048359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 14:05:39.048402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 14:05:39.048559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 14:05:39.048585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 14:05:39.048827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 14:05:39.048854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 14:05:39.048959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 14:05:39.048983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 14:05:39.049217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 14:05:39.049246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 14:05:39.049381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 14:05:39.049407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 14:05:39.049525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 14:05:39.049558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 
00:26:44.508 [2024-07-15 14:05:39.049685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 14:05:39.049710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 14:05:39.049857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 14:05:39.049883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 14:05:39.050018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 14:05:39.050043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 14:05:39.050313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 14:05:39.050350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 14:05:39.050519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 14:05:39.050549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 14:05:39.050723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 14:05:39.050770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 14:05:39.050904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 14:05:39.050931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 14:05:39.051051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 14:05:39.051077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 14:05:39.051200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 14:05:39.051225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 14:05:39.051353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 14:05:39.051380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 
00:26:44.508 [2024-07-15 14:05:39.051485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 14:05:39.051512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 14:05:39.051644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 14:05:39.051671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 14:05:39.051856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 14:05:39.051883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 14:05:39.051996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 14:05:39.052023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 14:05:39.052217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 14:05:39.052244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 14:05:39.052403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 14:05:39.052429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 14:05:39.052606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 14:05:39.052633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-07-15 14:05:39.052795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-07-15 14:05:39.052822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.509 [2024-07-15 14:05:39.052959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-07-15 14:05:39.052986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-07-15 14:05:39.053188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-07-15 14:05:39.053215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 
00:26:44.509 [2024-07-15 14:05:39.053363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-07-15 14:05:39.053390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-07-15 14:05:39.053577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-07-15 14:05:39.053603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-07-15 14:05:39.053775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-07-15 14:05:39.053802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-07-15 14:05:39.053911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-07-15 14:05:39.053936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-07-15 14:05:39.054048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-07-15 14:05:39.054073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-07-15 14:05:39.054197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-07-15 14:05:39.054234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-07-15 14:05:39.054368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-07-15 14:05:39.054394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-07-15 14:05:39.054550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-07-15 14:05:39.054577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-07-15 14:05:39.054801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-07-15 14:05:39.054828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-07-15 14:05:39.055053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-07-15 14:05:39.055079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 
00:26:44.509 [2024-07-15 14:05:39.055327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-07-15 14:05:39.055357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-07-15 14:05:39.055466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-07-15 14:05:39.055491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-07-15 14:05:39.055779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-07-15 14:05:39.055806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-07-15 14:05:39.055930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-07-15 14:05:39.055955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-07-15 14:05:39.056204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-07-15 14:05:39.056231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-07-15 14:05:39.056362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-07-15 14:05:39.056387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-07-15 14:05:39.056509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-07-15 14:05:39.056535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-07-15 14:05:39.056719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-07-15 14:05:39.056755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-07-15 14:05:39.056877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-07-15 14:05:39.056903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-07-15 14:05:39.057030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-07-15 14:05:39.057057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 
00:26:44.509 [2024-07-15 14:05:39.057189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-07-15 14:05:39.057215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-07-15 14:05:39.057364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-07-15 14:05:39.057390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-07-15 14:05:39.057517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-07-15 14:05:39.057543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-07-15 14:05:39.057729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-07-15 14:05:39.057760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-07-15 14:05:39.057888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-07-15 14:05:39.057922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-07-15 14:05:39.058080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-07-15 14:05:39.058107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-07-15 14:05:39.058321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-07-15 14:05:39.058347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-07-15 14:05:39.058492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-07-15 14:05:39.058519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-07-15 14:05:39.058703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-07-15 14:05:39.058729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-07-15 14:05:39.058840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-07-15 14:05:39.058865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 
00:26:44.509 [2024-07-15 14:05:39.059051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-07-15 14:05:39.059077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-07-15 14:05:39.059238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-07-15 14:05:39.059264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-07-15 14:05:39.059443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-07-15 14:05:39.059471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-07-15 14:05:39.059637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-07-15 14:05:39.059663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-07-15 14:05:39.059876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-07-15 14:05:39.059903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-07-15 14:05:39.060059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-07-15 14:05:39.060085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-07-15 14:05:39.060265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-07-15 14:05:39.060291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-07-15 14:05:39.060450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-07-15 14:05:39.060476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-07-15 14:05:39.060604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-07-15 14:05:39.060635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-07-15 14:05:39.060813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-07-15 14:05:39.060840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 
00:26:44.510 [2024-07-15 14:05:39.061068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-07-15 14:05:39.061094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-07-15 14:05:39.061231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-07-15 14:05:39.061255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-07-15 14:05:39.061417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-07-15 14:05:39.061444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-07-15 14:05:39.061575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-07-15 14:05:39.061600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-07-15 14:05:39.061745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-07-15 14:05:39.061770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-07-15 14:05:39.061874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-07-15 14:05:39.061899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-07-15 14:05:39.062112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-07-15 14:05:39.062138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-07-15 14:05:39.062306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-07-15 14:05:39.062343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-07-15 14:05:39.062479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-07-15 14:05:39.062505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-07-15 14:05:39.062674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-07-15 14:05:39.062701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 
00:26:44.510 [2024-07-15 14:05:39.062835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-07-15 14:05:39.062861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-07-15 14:05:39.063024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-07-15 14:05:39.063050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-07-15 14:05:39.063201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-07-15 14:05:39.063228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-07-15 14:05:39.063438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-07-15 14:05:39.063464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-07-15 14:05:39.063574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-07-15 14:05:39.063599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-07-15 14:05:39.063765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-07-15 14:05:39.063792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-07-15 14:05:39.063905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-07-15 14:05:39.063929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-07-15 14:05:39.064158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-07-15 14:05:39.064192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-07-15 14:05:39.064328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-07-15 14:05:39.064361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-07-15 14:05:39.064526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-07-15 14:05:39.064552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 
00:26:44.510 [2024-07-15 14:05:39.064718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-07-15 14:05:39.064752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-07-15 14:05:39.064960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-07-15 14:05:39.064986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-07-15 14:05:39.065161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-07-15 14:05:39.065199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-07-15 14:05:39.065313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-07-15 14:05:39.065339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-07-15 14:05:39.065444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-07-15 14:05:39.065474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-07-15 14:05:39.065706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-07-15 14:05:39.065761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-07-15 14:05:39.065965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-07-15 14:05:39.065991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-07-15 14:05:39.066102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-07-15 14:05:39.066127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-07-15 14:05:39.066260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-07-15 14:05:39.066296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-07-15 14:05:39.066508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-07-15 14:05:39.066534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 
00:26:44.510 [2024-07-15 14:05:39.066686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-07-15 14:05:39.066713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-07-15 14:05:39.066854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-07-15 14:05:39.066880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-07-15 14:05:39.067011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-07-15 14:05:39.067037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-07-15 14:05:39.067190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-07-15 14:05:39.067216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-07-15 14:05:39.067353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-07-15 14:05:39.067379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-07-15 14:05:39.067505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-07-15 14:05:39.067531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-07-15 14:05:39.067697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-07-15 14:05:39.067723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-07-15 14:05:39.067939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-07-15 14:05:39.067966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-07-15 14:05:39.068178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-07-15 14:05:39.068204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-07-15 14:05:39.068343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-07-15 14:05:39.068369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 
00:26:44.511 [2024-07-15 14:05:39.068537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-07-15 14:05:39.068566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-07-15 14:05:39.068727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-07-15 14:05:39.068760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-07-15 14:05:39.068873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-07-15 14:05:39.068897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-07-15 14:05:39.069036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-07-15 14:05:39.069062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-07-15 14:05:39.069240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-07-15 14:05:39.069266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-07-15 14:05:39.069427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-07-15 14:05:39.069454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-07-15 14:05:39.069579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-07-15 14:05:39.069606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-07-15 14:05:39.069817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-07-15 14:05:39.069846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-07-15 14:05:39.069988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-07-15 14:05:39.070014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-07-15 14:05:39.070202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-07-15 14:05:39.070228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 
00:26:44.511 [2024-07-15 14:05:39.070376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-07-15 14:05:39.070402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-07-15 14:05:39.070647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-07-15 14:05:39.070674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-07-15 14:05:39.070821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-07-15 14:05:39.070847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-07-15 14:05:39.071034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-07-15 14:05:39.071066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-07-15 14:05:39.071237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-07-15 14:05:39.071263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-07-15 14:05:39.071465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-07-15 14:05:39.071491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-07-15 14:05:39.071647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-07-15 14:05:39.071673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-07-15 14:05:39.071855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-07-15 14:05:39.071882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-07-15 14:05:39.072046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-07-15 14:05:39.072072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-07-15 14:05:39.072299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-07-15 14:05:39.072336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 
00:26:44.511 [2024-07-15 14:05:39.072520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-07-15 14:05:39.072558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-07-15 14:05:39.072810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-07-15 14:05:39.072849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-07-15 14:05:39.072967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-07-15 14:05:39.072992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-07-15 14:05:39.073177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-07-15 14:05:39.073204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-07-15 14:05:39.073402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-07-15 14:05:39.073432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-07-15 14:05:39.073561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-07-15 14:05:39.073585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-07-15 14:05:39.073765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-07-15 14:05:39.073792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-07-15 14:05:39.073942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-07-15 14:05:39.073969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-07-15 14:05:39.074118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-07-15 14:05:39.074144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-07-15 14:05:39.074297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-07-15 14:05:39.074322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 
00:26:44.511 [2024-07-15 14:05:39.074471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-07-15 14:05:39.074497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-07-15 14:05:39.074651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-07-15 14:05:39.074678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-07-15 14:05:39.074874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-07-15 14:05:39.074901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-07-15 14:05:39.075084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-07-15 14:05:39.075110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-07-15 14:05:39.075271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-07-15 14:05:39.075307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-07-15 14:05:39.075447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-07-15 14:05:39.075472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-07-15 14:05:39.075652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-07-15 14:05:39.075678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-07-15 14:05:39.075810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 14:05:39.075836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 14:05:39.075997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 14:05:39.076023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 14:05:39.076142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 14:05:39.076167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 
00:26:44.512 [2024-07-15 14:05:39.076392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 14:05:39.076427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 14:05:39.076576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 14:05:39.076602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 14:05:39.076773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 14:05:39.076800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 14:05:39.076941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 14:05:39.076967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 14:05:39.077136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 14:05:39.077176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 14:05:39.077340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 14:05:39.077366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.512 [2024-07-15 14:05:39.077354] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 14:05:39.077416] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:44.512 [2024-07-15 14:05:39.077493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 14:05:39.077518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 14:05:39.077629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 14:05:39.077654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 14:05:39.077760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 14:05:39.077785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 
00:26:44.512 [2024-07-15 14:05:39.077909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 14:05:39.077943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 14:05:39.078114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 14:05:39.078141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 14:05:39.078272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 14:05:39.078298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 14:05:39.078417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 14:05:39.078443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 14:05:39.078603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 14:05:39.078629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 14:05:39.078758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 14:05:39.078784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 14:05:39.078891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 14:05:39.078917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 14:05:39.079013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 14:05:39.079049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 14:05:39.079212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 14:05:39.079238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 14:05:39.079365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 14:05:39.079392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 
00:26:44.512 [2024-07-15 14:05:39.079570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 14:05:39.079595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 14:05:39.079801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 14:05:39.079828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 14:05:39.079942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 14:05:39.079968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 14:05:39.080123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 14:05:39.080149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 14:05:39.080311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 14:05:39.080337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 14:05:39.080471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 14:05:39.080497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 14:05:39.080623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 14:05:39.080649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 14:05:39.080878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 14:05:39.080904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 14:05:39.081057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 14:05:39.081083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 14:05:39.081201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 14:05:39.081228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 
00:26:44.512 [2024-07-15 14:05:39.081396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 14:05:39.081422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 14:05:39.081542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 14:05:39.081568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 14:05:39.081664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 14:05:39.081690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 14:05:39.081819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 14:05:39.081846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 14:05:39.081979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 14:05:39.082005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 14:05:39.082137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 14:05:39.082163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 14:05:39.082283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 14:05:39.082309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 14:05:39.082466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 14:05:39.082496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 14:05:39.082651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 14:05:39.082677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 14:05:39.082805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 14:05:39.082832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 
00:26:44.512 [2024-07-15 14:05:39.082989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-07-15 14:05:39.083015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-07-15 14:05:39.083143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 14:05:39.083169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 14:05:39.083339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 14:05:39.083364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 14:05:39.083552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 14:05:39.083578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 14:05:39.083680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 14:05:39.083706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 14:05:39.083809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 14:05:39.083836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 14:05:39.083987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 14:05:39.084013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 14:05:39.084239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 14:05:39.084265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 14:05:39.084392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 14:05:39.084419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 14:05:39.084579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 14:05:39.084605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 
00:26:44.513 [2024-07-15 14:05:39.084833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 14:05:39.084860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 14:05:39.084966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 14:05:39.084993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 14:05:39.085119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 14:05:39.085145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 14:05:39.085244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 14:05:39.085270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 14:05:39.085397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 14:05:39.085424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 14:05:39.085526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 14:05:39.085552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 14:05:39.085694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 14:05:39.085720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 14:05:39.085865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 14:05:39.085891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 14:05:39.086038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 14:05:39.086063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 14:05:39.086262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 14:05:39.086289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 
00:26:44.513 [2024-07-15 14:05:39.086415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 14:05:39.086442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 14:05:39.086598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 14:05:39.086623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 14:05:39.086825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 14:05:39.086853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 14:05:39.087004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 14:05:39.087031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 14:05:39.087208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 14:05:39.087234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 14:05:39.087386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 14:05:39.087412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 14:05:39.087564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 14:05:39.087591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 14:05:39.087777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 14:05:39.087803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 14:05:39.087963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 14:05:39.087989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 14:05:39.088167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 14:05:39.088193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 
00:26:44.513 [2024-07-15 14:05:39.088346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 14:05:39.088372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 14:05:39.088503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 14:05:39.088529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 14:05:39.088657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 14:05:39.088683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 14:05:39.088853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 14:05:39.088879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 14:05:39.089041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 14:05:39.089068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 14:05:39.089176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 14:05:39.089202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 14:05:39.089337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 14:05:39.089363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 14:05:39.089548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 14:05:39.089578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 14:05:39.089773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 14:05:39.089800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 14:05:39.089902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 14:05:39.089928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 
00:26:44.513 [2024-07-15 14:05:39.090121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 14:05:39.090148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 14:05:39.090251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 14:05:39.090277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 14:05:39.090427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 14:05:39.090453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 14:05:39.090641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 14:05:39.090668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 14:05:39.090806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-07-15 14:05:39.090833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-07-15 14:05:39.090932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 14:05:39.090958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 14:05:39.091183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 14:05:39.091209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 14:05:39.091384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 14:05:39.091410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 14:05:39.091541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 14:05:39.091567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 14:05:39.091724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 14:05:39.091755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 
00:26:44.514 [2024-07-15 14:05:39.091884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 14:05:39.091910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 14:05:39.092077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 14:05:39.092104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 14:05:39.092256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 14:05:39.092282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 14:05:39.092468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 14:05:39.092494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 14:05:39.092624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 14:05:39.092649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 14:05:39.092818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 14:05:39.092845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 14:05:39.092974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 14:05:39.093000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 14:05:39.093211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 14:05:39.093237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 14:05:39.093356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 14:05:39.093382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 14:05:39.093479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 14:05:39.093505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 
00:26:44.514 [2024-07-15 14:05:39.093659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 14:05:39.093685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 14:05:39.093780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 14:05:39.093806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 14:05:39.093936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 14:05:39.093963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 14:05:39.094089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 14:05:39.094115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 14:05:39.094248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 14:05:39.094274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 14:05:39.094436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 14:05:39.094462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 14:05:39.094670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 14:05:39.094696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 14:05:39.094843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 14:05:39.094869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 14:05:39.094970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 14:05:39.094996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 14:05:39.095121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 14:05:39.095147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 
00:26:44.514 [2024-07-15 14:05:39.095350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 14:05:39.095377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 14:05:39.095492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 14:05:39.095517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 14:05:39.095644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 14:05:39.095670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 14:05:39.095800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 14:05:39.095827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 14:05:39.095963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 14:05:39.095988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 14:05:39.096081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 14:05:39.096107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 14:05:39.096232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 14:05:39.096258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 14:05:39.096412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 14:05:39.096442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 14:05:39.096559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 14:05:39.096585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-07-15 14:05:39.096720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 14:05:39.096751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 
00:26:44.514 [2024-07-15 14:05:39.096962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-07-15 14:05:39.096988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it.
00:26:44.514-00:26:44.519 [... the same three-message sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 2024-07-15 14:05:39.097116 through 14:05:39.133978, interrupted only by the EAL message below ...]
00:26:44.517 EAL: No free 2048 kB hugepages reported on node 1
00:26:44.519 [2024-07-15 14:05:39.134133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-07-15 14:05:39.134170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-07-15 14:05:39.134289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-07-15 14:05:39.134313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-07-15 14:05:39.134459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-07-15 14:05:39.134483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-07-15 14:05:39.134641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-07-15 14:05:39.134665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-07-15 14:05:39.134771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-07-15 14:05:39.134797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-07-15 14:05:39.134963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-07-15 14:05:39.135004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-07-15 14:05:39.135134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-07-15 14:05:39.135171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-07-15 14:05:39.135355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-07-15 14:05:39.135379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-07-15 14:05:39.135551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-07-15 14:05:39.135575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-07-15 14:05:39.135680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-07-15 14:05:39.135703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 
00:26:44.519 [2024-07-15 14:05:39.135822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-07-15 14:05:39.135849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-07-15 14:05:39.135949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-07-15 14:05:39.135974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-07-15 14:05:39.136067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-07-15 14:05:39.136108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-07-15 14:05:39.136243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-07-15 14:05:39.136267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-07-15 14:05:39.136408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.519 [2024-07-15 14:05:39.136431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.519 qpair failed and we were unable to recover it. 00:26:44.519 [2024-07-15 14:05:39.136605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 14:05:39.136629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 14:05:39.136802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 14:05:39.136828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 14:05:39.137014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 14:05:39.137038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 14:05:39.137236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 14:05:39.137259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 14:05:39.137367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 14:05:39.137390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 
00:26:44.520 [2024-07-15 14:05:39.137548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 14:05:39.137571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 14:05:39.137719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 14:05:39.137763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 14:05:39.137926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 14:05:39.137951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 14:05:39.138105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 14:05:39.138129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 14:05:39.138251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 14:05:39.138274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 14:05:39.138431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 14:05:39.138456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 14:05:39.138629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 14:05:39.138653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 14:05:39.138812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 14:05:39.138840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 14:05:39.139071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 14:05:39.139093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 14:05:39.139243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 14:05:39.139266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 
00:26:44.520 [2024-07-15 14:05:39.139395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 14:05:39.139419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 14:05:39.139588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 14:05:39.139625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 14:05:39.139751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 14:05:39.139776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 14:05:39.139900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 14:05:39.139926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 14:05:39.140077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 14:05:39.140100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 14:05:39.140288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 14:05:39.140311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 14:05:39.140449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 14:05:39.140473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 14:05:39.140640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 14:05:39.140678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 14:05:39.140820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 14:05:39.140845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 14:05:39.141074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 14:05:39.141098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 
00:26:44.520 [2024-07-15 14:05:39.141275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 14:05:39.141298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 14:05:39.141476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 14:05:39.141500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 14:05:39.141718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 14:05:39.141747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 14:05:39.141859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 14:05:39.141883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 14:05:39.142069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 14:05:39.142108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 14:05:39.142307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 14:05:39.142330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 14:05:39.142479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 14:05:39.142503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 14:05:39.142624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 14:05:39.142647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 14:05:39.142808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 14:05:39.142835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 14:05:39.143061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 14:05:39.143086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 
00:26:44.520 [2024-07-15 14:05:39.143219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 14:05:39.143242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 14:05:39.143391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 14:05:39.143415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 14:05:39.143567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 14:05:39.143606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 14:05:39.143740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 14:05:39.143765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 14:05:39.143938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 14:05:39.143977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 14:05:39.144144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 14:05:39.144169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 14:05:39.144348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 14:05:39.144372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 14:05:39.144500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 14:05:39.144548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 14:05:39.144762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 14:05:39.144802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.520 [2024-07-15 14:05:39.144938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 14:05:39.144964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 
00:26:44.520 [2024-07-15 14:05:39.145198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.520 [2024-07-15 14:05:39.145221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.520 qpair failed and we were unable to recover it. 00:26:44.521 [2024-07-15 14:05:39.145335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 14:05:39.145359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-07-15 14:05:39.145500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 14:05:39.145525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-07-15 14:05:39.145769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 14:05:39.145801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-07-15 14:05:39.145933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 14:05:39.145957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-07-15 14:05:39.146123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 14:05:39.146147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-07-15 14:05:39.146293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 14:05:39.146342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-07-15 14:05:39.146579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 14:05:39.146605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-07-15 14:05:39.146785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 14:05:39.146810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-07-15 14:05:39.147027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 14:05:39.147052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 
00:26:44.521 [2024-07-15 14:05:39.147237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 14:05:39.147261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-07-15 14:05:39.147400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 14:05:39.147424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-07-15 14:05:39.147539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 14:05:39.147563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-07-15 14:05:39.147687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 14:05:39.147711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-07-15 14:05:39.147834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 14:05:39.147860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-07-15 14:05:39.148001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 14:05:39.148025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-07-15 14:05:39.148151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 14:05:39.148202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-07-15 14:05:39.148376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 14:05:39.148399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-07-15 14:05:39.148557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 14:05:39.148605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-07-15 14:05:39.148764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 14:05:39.148789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 
00:26:44.521 [2024-07-15 14:05:39.148992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 14:05:39.149017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-07-15 14:05:39.149137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 14:05:39.149161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-07-15 14:05:39.149302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 14:05:39.149326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-07-15 14:05:39.149484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 14:05:39.149509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-07-15 14:05:39.149646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 14:05:39.149684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-07-15 14:05:39.149877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 14:05:39.149903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-07-15 14:05:39.150099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 14:05:39.150122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-07-15 14:05:39.150291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 14:05:39.150315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-07-15 14:05:39.150509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 14:05:39.150532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-07-15 14:05:39.150708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 14:05:39.150731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 
00:26:44.521 [2024-07-15 14:05:39.150847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 14:05:39.150871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-07-15 14:05:39.151005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 14:05:39.151043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-07-15 14:05:39.151170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 14:05:39.151185] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:44.521 [2024-07-15 14:05:39.151193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-07-15 14:05:39.151373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 14:05:39.151397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-07-15 14:05:39.151572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 14:05:39.151622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-07-15 14:05:39.151806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 14:05:39.151835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-07-15 14:05:39.152002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 14:05:39.152044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-07-15 14:05:39.152194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 14:05:39.152218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-07-15 14:05:39.152383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 14:05:39.152422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 
00:26:44.521 [2024-07-15 14:05:39.152571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 14:05:39.152595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-07-15 14:05:39.152776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 14:05:39.152802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-07-15 14:05:39.153014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 14:05:39.153039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-07-15 14:05:39.153181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 14:05:39.153220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-07-15 14:05:39.153388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 14:05:39.153412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-07-15 14:05:39.153565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 14:05:39.153604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-07-15 14:05:39.153790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 14:05:39.153829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-07-15 14:05:39.153959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 14:05:39.153983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-07-15 14:05:39.154120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 14:05:39.154162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.521 qpair failed and we were unable to recover it. 00:26:44.521 [2024-07-15 14:05:39.154287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.521 [2024-07-15 14:05:39.154311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.522 qpair failed and we were unable to recover it. 
00:26:44.522 [2024-07-15 14:05:39.154441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-07-15 14:05:39.154465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.522 qpair failed and we were unable to recover it. 00:26:44.522 [2024-07-15 14:05:39.154648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-07-15 14:05:39.154672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.522 qpair failed and we were unable to recover it. 00:26:44.522 [2024-07-15 14:05:39.154828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-07-15 14:05:39.154853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.522 qpair failed and we were unable to recover it. 00:26:44.522 [2024-07-15 14:05:39.155077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-07-15 14:05:39.155101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.522 qpair failed and we were unable to recover it. 00:26:44.522 [2024-07-15 14:05:39.155237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-07-15 14:05:39.155260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.522 qpair failed and we were unable to recover it. 00:26:44.522 [2024-07-15 14:05:39.155400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-07-15 14:05:39.155425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.522 qpair failed and we were unable to recover it. 00:26:44.522 [2024-07-15 14:05:39.155560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-07-15 14:05:39.155590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.522 qpair failed and we were unable to recover it. 00:26:44.522 [2024-07-15 14:05:39.155762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-07-15 14:05:39.155789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.522 qpair failed and we were unable to recover it. 00:26:44.522 [2024-07-15 14:05:39.155914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-07-15 14:05:39.155940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.522 qpair failed and we were unable to recover it. 00:26:44.522 [2024-07-15 14:05:39.156049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-07-15 14:05:39.156074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.522 qpair failed and we were unable to recover it. 
00:26:44.522 [2024-07-15 14:05:39.156220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-07-15 14:05:39.156258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.522 qpair failed and we were unable to recover it. 00:26:44.522 [2024-07-15 14:05:39.156428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-07-15 14:05:39.156451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.522 qpair failed and we were unable to recover it. 00:26:44.522 [2024-07-15 14:05:39.156595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-07-15 14:05:39.156635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.522 qpair failed and we were unable to recover it. 00:26:44.522 [2024-07-15 14:05:39.156873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-07-15 14:05:39.156898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.522 qpair failed and we were unable to recover it. 00:26:44.522 [2024-07-15 14:05:39.157026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-07-15 14:05:39.157050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.522 qpair failed and we were unable to recover it. 00:26:44.522 [2024-07-15 14:05:39.157179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-07-15 14:05:39.157214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.522 qpair failed and we were unable to recover it. 00:26:44.522 [2024-07-15 14:05:39.157399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-07-15 14:05:39.157422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.522 qpair failed and we were unable to recover it. 00:26:44.522 [2024-07-15 14:05:39.157587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-07-15 14:05:39.157626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.522 qpair failed and we were unable to recover it. 00:26:44.522 [2024-07-15 14:05:39.157806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-07-15 14:05:39.157846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.522 qpair failed and we were unable to recover it. 00:26:44.522 [2024-07-15 14:05:39.158051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.522 [2024-07-15 14:05:39.158075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.522 qpair failed and we were unable to recover it. 
00:26:44.522 [2024-07-15 14:05:39.158220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.522 [2024-07-15 14:05:39.158259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420
00:26:44.522 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (connect() failed, errno = 111; sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for roughly 200 further connection attempts between 14:05:39.158 and 14:05:39.198, with only the timestamps changing ...]
00:26:44.525 [2024-07-15 14:05:39.189083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.525 [2024-07-15 14:05:39.189125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f9ea0 with addr=10.0.0.2, port=4420
00:26:44.525 qpair failed and we were unable to recover it.
[... three more attempts against tqpair=0x23f9ea0 fail the same way, after which the log returns to tqpair=0x7f7dbc000b90 ...]
00:26:44.526 [2024-07-15 14:05:39.198104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.526 [2024-07-15 14:05:39.198129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420
00:26:44.526 qpair failed and we were unable to recover it.
00:26:44.526 [2024-07-15 14:05:39.198264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.526 [2024-07-15 14:05:39.198303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.526 qpair failed and we were unable to recover it. 00:26:44.526 [2024-07-15 14:05:39.198537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.526 [2024-07-15 14:05:39.198560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.526 qpair failed and we were unable to recover it. 00:26:44.526 [2024-07-15 14:05:39.198748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.526 [2024-07-15 14:05:39.198774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.526 qpair failed and we were unable to recover it. 00:26:44.526 [2024-07-15 14:05:39.198911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.526 [2024-07-15 14:05:39.198936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.526 qpair failed and we were unable to recover it. 00:26:44.526 [2024-07-15 14:05:39.199179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.526 [2024-07-15 14:05:39.199203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.526 qpair failed and we were unable to recover it. 00:26:44.526 [2024-07-15 14:05:39.199358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.526 [2024-07-15 14:05:39.199382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.526 qpair failed and we were unable to recover it. 00:26:44.526 [2024-07-15 14:05:39.199501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.526 [2024-07-15 14:05:39.199529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.526 qpair failed and we were unable to recover it. 00:26:44.526 [2024-07-15 14:05:39.199648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.526 [2024-07-15 14:05:39.199672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.526 qpair failed and we were unable to recover it. 00:26:44.526 [2024-07-15 14:05:39.199902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.526 [2024-07-15 14:05:39.199934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.526 qpair failed and we were unable to recover it. 00:26:44.526 [2024-07-15 14:05:39.200109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.526 [2024-07-15 14:05:39.200133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.526 qpair failed and we were unable to recover it. 
00:26:44.526 [2024-07-15 14:05:39.200311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.526 [2024-07-15 14:05:39.200335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.526 qpair failed and we were unable to recover it. 00:26:44.526 [2024-07-15 14:05:39.200448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.526 [2024-07-15 14:05:39.200487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.526 qpair failed and we were unable to recover it. 00:26:44.526 [2024-07-15 14:05:39.200632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.526 [2024-07-15 14:05:39.200667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.526 qpair failed and we were unable to recover it. 00:26:44.526 [2024-07-15 14:05:39.200885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.526 [2024-07-15 14:05:39.200910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.526 qpair failed and we were unable to recover it. 00:26:44.526 [2024-07-15 14:05:39.201071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.527 [2024-07-15 14:05:39.201109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.527 qpair failed and we were unable to recover it. 00:26:44.527 [2024-07-15 14:05:39.201218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.527 [2024-07-15 14:05:39.201257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.527 qpair failed and we were unable to recover it. 00:26:44.527 [2024-07-15 14:05:39.201393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.527 [2024-07-15 14:05:39.201417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.527 qpair failed and we were unable to recover it. 00:26:44.527 [2024-07-15 14:05:39.201586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.527 [2024-07-15 14:05:39.201610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.527 qpair failed and we were unable to recover it. 00:26:44.527 [2024-07-15 14:05:39.201785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.527 [2024-07-15 14:05:39.201811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.527 qpair failed and we were unable to recover it. 00:26:44.527 [2024-07-15 14:05:39.202029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.527 [2024-07-15 14:05:39.202053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.527 qpair failed and we were unable to recover it. 
00:26:44.527 [2024-07-15 14:05:39.202155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.527 [2024-07-15 14:05:39.202179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.527 qpair failed and we were unable to recover it. 00:26:44.527 [2024-07-15 14:05:39.202285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.527 [2024-07-15 14:05:39.202317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.527 qpair failed and we were unable to recover it. 00:26:44.527 [2024-07-15 14:05:39.202481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.527 [2024-07-15 14:05:39.202506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.527 qpair failed and we were unable to recover it. 00:26:44.527 [2024-07-15 14:05:39.202689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.527 [2024-07-15 14:05:39.202712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.527 qpair failed and we were unable to recover it. 00:26:44.527 [2024-07-15 14:05:39.202892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.527 [2024-07-15 14:05:39.202916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.527 qpair failed and we were unable to recover it. 00:26:44.527 [2024-07-15 14:05:39.203098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.527 [2024-07-15 14:05:39.203122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.527 qpair failed and we were unable to recover it. 00:26:44.527 [2024-07-15 14:05:39.203273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.527 [2024-07-15 14:05:39.203297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.527 qpair failed and we were unable to recover it. 00:26:44.527 [2024-07-15 14:05:39.203493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.527 [2024-07-15 14:05:39.203517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.527 qpair failed and we were unable to recover it. 00:26:44.527 [2024-07-15 14:05:39.203672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.527 [2024-07-15 14:05:39.203695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.527 qpair failed and we were unable to recover it. 00:26:44.527 [2024-07-15 14:05:39.203841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.527 [2024-07-15 14:05:39.203882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.527 qpair failed and we were unable to recover it. 
00:26:44.527 [2024-07-15 14:05:39.204097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.527 [2024-07-15 14:05:39.204136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.527 qpair failed and we were unable to recover it. 00:26:44.527 [2024-07-15 14:05:39.204257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.527 [2024-07-15 14:05:39.204281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.527 qpair failed and we were unable to recover it. 00:26:44.527 [2024-07-15 14:05:39.204428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.527 [2024-07-15 14:05:39.204452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.527 qpair failed and we were unable to recover it. 00:26:44.527 [2024-07-15 14:05:39.204685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.527 [2024-07-15 14:05:39.204714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.527 qpair failed and we were unable to recover it. 00:26:44.527 [2024-07-15 14:05:39.204866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.527 [2024-07-15 14:05:39.204891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.527 qpair failed and we were unable to recover it. 00:26:44.527 [2024-07-15 14:05:39.205115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.527 [2024-07-15 14:05:39.205139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.527 qpair failed and we were unable to recover it. 00:26:44.527 [2024-07-15 14:05:39.205257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.527 [2024-07-15 14:05:39.205280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.527 qpair failed and we were unable to recover it. 00:26:44.527 [2024-07-15 14:05:39.205414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.527 [2024-07-15 14:05:39.205438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.527 qpair failed and we were unable to recover it. 00:26:44.527 [2024-07-15 14:05:39.205588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.527 [2024-07-15 14:05:39.205627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.527 qpair failed and we were unable to recover it. 00:26:44.527 [2024-07-15 14:05:39.205767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.527 [2024-07-15 14:05:39.205792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.527 qpair failed and we were unable to recover it. 
00:26:44.527 [2024-07-15 14:05:39.206067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.527 [2024-07-15 14:05:39.206092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.527 qpair failed and we were unable to recover it. 00:26:44.527 [2024-07-15 14:05:39.206244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.527 [2024-07-15 14:05:39.206268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.527 qpair failed and we were unable to recover it. 00:26:44.527 [2024-07-15 14:05:39.206471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.527 [2024-07-15 14:05:39.206501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.527 qpair failed and we were unable to recover it. 00:26:44.527 [2024-07-15 14:05:39.206707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.527 [2024-07-15 14:05:39.206730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.527 qpair failed and we were unable to recover it. 00:26:44.527 [2024-07-15 14:05:39.206862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.527 [2024-07-15 14:05:39.206888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.527 qpair failed and we were unable to recover it. 00:26:44.527 [2024-07-15 14:05:39.207041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.527 [2024-07-15 14:05:39.207083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.527 qpair failed and we were unable to recover it. 00:26:44.527 [2024-07-15 14:05:39.207234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.527 [2024-07-15 14:05:39.207258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.527 qpair failed and we were unable to recover it. 00:26:44.527 [2024-07-15 14:05:39.207431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.527 [2024-07-15 14:05:39.207456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.527 qpair failed and we were unable to recover it. 00:26:44.527 [2024-07-15 14:05:39.207751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.527 [2024-07-15 14:05:39.207783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.527 qpair failed and we were unable to recover it. 00:26:44.527 [2024-07-15 14:05:39.207948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.527 [2024-07-15 14:05:39.207973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.527 qpair failed and we were unable to recover it. 
00:26:44.527 [2024-07-15 14:05:39.208124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.527 [2024-07-15 14:05:39.208162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.527 qpair failed and we were unable to recover it. 00:26:44.527 [2024-07-15 14:05:39.208332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.527 [2024-07-15 14:05:39.208355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.527 qpair failed and we were unable to recover it. 00:26:44.527 [2024-07-15 14:05:39.208550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.527 [2024-07-15 14:05:39.208573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.527 qpair failed and we were unable to recover it. 00:26:44.527 [2024-07-15 14:05:39.208682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.527 [2024-07-15 14:05:39.208721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.527 qpair failed and we were unable to recover it. 00:26:44.527 [2024-07-15 14:05:39.208865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.527 [2024-07-15 14:05:39.208891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.527 qpair failed and we were unable to recover it. 00:26:44.527 [2024-07-15 14:05:39.209033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.527 [2024-07-15 14:05:39.209058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.527 qpair failed and we were unable to recover it. 00:26:44.527 [2024-07-15 14:05:39.209231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.527 [2024-07-15 14:05:39.209255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.527 qpair failed and we were unable to recover it. 00:26:44.527 [2024-07-15 14:05:39.209463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.527 [2024-07-15 14:05:39.209486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.527 qpair failed and we were unable to recover it. 00:26:44.527 [2024-07-15 14:05:39.209640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.527 [2024-07-15 14:05:39.209664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.527 qpair failed and we were unable to recover it. 00:26:44.527 [2024-07-15 14:05:39.209931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.527 [2024-07-15 14:05:39.209956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.527 qpair failed and we were unable to recover it. 
00:26:44.527 [2024-07-15 14:05:39.210131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.527 [2024-07-15 14:05:39.210154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.527 qpair failed and we were unable to recover it. 00:26:44.527 [2024-07-15 14:05:39.210267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.527 [2024-07-15 14:05:39.210291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.527 qpair failed and we were unable to recover it. 00:26:44.527 [2024-07-15 14:05:39.210471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.527 [2024-07-15 14:05:39.210509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.527 qpair failed and we were unable to recover it. 00:26:44.527 [2024-07-15 14:05:39.210657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.527 [2024-07-15 14:05:39.210696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.527 qpair failed and we were unable to recover it. 00:26:44.527 [2024-07-15 14:05:39.210909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.527 [2024-07-15 14:05:39.210936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.527 qpair failed and we were unable to recover it. 00:26:44.527 [2024-07-15 14:05:39.211051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.527 [2024-07-15 14:05:39.211075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.527 qpair failed and we were unable to recover it. 00:26:44.527 [2024-07-15 14:05:39.211260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.527 [2024-07-15 14:05:39.211284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.527 qpair failed and we were unable to recover it. 00:26:44.527 [2024-07-15 14:05:39.211433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.527 [2024-07-15 14:05:39.211457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.527 qpair failed and we were unable to recover it. 00:26:44.527 [2024-07-15 14:05:39.211604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.527 [2024-07-15 14:05:39.211641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.527 qpair failed and we were unable to recover it. 00:26:44.527 [2024-07-15 14:05:39.211891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.527 [2024-07-15 14:05:39.211916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.527 qpair failed and we were unable to recover it. 
00:26:44.527 [2024-07-15 14:05:39.212070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.527 [2024-07-15 14:05:39.212115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.527 qpair failed and we were unable to recover it. 00:26:44.527 [2024-07-15 14:05:39.212278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.528 [2024-07-15 14:05:39.212302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.528 qpair failed and we were unable to recover it. 00:26:44.528 [2024-07-15 14:05:39.212439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.528 [2024-07-15 14:05:39.212479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.528 qpair failed and we were unable to recover it. 00:26:44.528 [2024-07-15 14:05:39.212598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.528 [2024-07-15 14:05:39.212621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.528 qpair failed and we were unable to recover it. 00:26:44.528 [2024-07-15 14:05:39.212753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.528 [2024-07-15 14:05:39.212794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.528 qpair failed and we were unable to recover it. 00:26:44.528 [2024-07-15 14:05:39.212930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.528 [2024-07-15 14:05:39.212955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.528 qpair failed and we were unable to recover it. 00:26:44.528 [2024-07-15 14:05:39.213108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.528 [2024-07-15 14:05:39.213146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.528 qpair failed and we were unable to recover it. 00:26:44.528 [2024-07-15 14:05:39.213275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.528 [2024-07-15 14:05:39.213314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.528 qpair failed and we were unable to recover it. 00:26:44.528 [2024-07-15 14:05:39.213470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.528 [2024-07-15 14:05:39.213509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.528 qpair failed and we were unable to recover it. 00:26:44.528 [2024-07-15 14:05:39.213618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.528 [2024-07-15 14:05:39.213642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.528 qpair failed and we were unable to recover it. 
00:26:44.528 [2024-07-15 14:05:39.213838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.528 [2024-07-15 14:05:39.213865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.528 qpair failed and we were unable to recover it. 00:26:44.528 [2024-07-15 14:05:39.213965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.528 [2024-07-15 14:05:39.213991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.528 qpair failed and we were unable to recover it. 00:26:44.528 [2024-07-15 14:05:39.214131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.528 [2024-07-15 14:05:39.214155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.528 qpair failed and we were unable to recover it. 00:26:44.528 [2024-07-15 14:05:39.214313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.528 [2024-07-15 14:05:39.214337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.528 qpair failed and we were unable to recover it. 00:26:44.528 [2024-07-15 14:05:39.214521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.528 [2024-07-15 14:05:39.214560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.528 qpair failed and we were unable to recover it. 00:26:44.528 [2024-07-15 14:05:39.214697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.528 [2024-07-15 14:05:39.214722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.528 qpair failed and we were unable to recover it. 00:26:44.528 [2024-07-15 14:05:39.214866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.528 [2024-07-15 14:05:39.214891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.528 qpair failed and we were unable to recover it. 00:26:44.528 [2024-07-15 14:05:39.215042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.528 [2024-07-15 14:05:39.215068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.528 qpair failed and we were unable to recover it. 00:26:44.528 [2024-07-15 14:05:39.215216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.528 [2024-07-15 14:05:39.215243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.528 qpair failed and we were unable to recover it. 00:26:44.528 [2024-07-15 14:05:39.215363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.528 [2024-07-15 14:05:39.215388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.528 qpair failed and we were unable to recover it. 
00:26:44.528 [2024-07-15 14:05:39.215577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.528 [2024-07-15 14:05:39.215601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.528 qpair failed and we were unable to recover it. 00:26:44.528 [2024-07-15 14:05:39.215755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.528 [2024-07-15 14:05:39.215782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.528 qpair failed and we were unable to recover it. 00:26:44.528 [2024-07-15 14:05:39.215962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.528 [2024-07-15 14:05:39.215987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.528 qpair failed and we were unable to recover it. 00:26:44.528 [2024-07-15 14:05:39.216165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.528 [2024-07-15 14:05:39.216188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.528 qpair failed and we were unable to recover it. 00:26:44.528 [2024-07-15 14:05:39.216410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.528 [2024-07-15 14:05:39.216434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.528 qpair failed and we were unable to recover it. 00:26:44.528 [2024-07-15 14:05:39.216615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.528 [2024-07-15 14:05:39.216639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.528 qpair failed and we were unable to recover it. 00:26:44.528 [2024-07-15 14:05:39.216760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.528 [2024-07-15 14:05:39.216786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.528 qpair failed and we were unable to recover it. 00:26:44.528 [2024-07-15 14:05:39.216949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.528 [2024-07-15 14:05:39.216989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.528 qpair failed and we were unable to recover it. 00:26:44.528 [2024-07-15 14:05:39.217168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.528 [2024-07-15 14:05:39.217201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.528 qpair failed and we were unable to recover it. 00:26:44.528 [2024-07-15 14:05:39.217385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.528 [2024-07-15 14:05:39.217408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.528 qpair failed and we were unable to recover it. 
00:26:44.528 [2024-07-15 14:05:39.217518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.528 [2024-07-15 14:05:39.217557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.528 qpair failed and we were unable to recover it. 00:26:44.528 [2024-07-15 14:05:39.217685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.528 [2024-07-15 14:05:39.217709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.528 qpair failed and we were unable to recover it. 00:26:44.528 [2024-07-15 14:05:39.217867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.528 [2024-07-15 14:05:39.217892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.528 qpair failed and we were unable to recover it. 00:26:44.528 [2024-07-15 14:05:39.218012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.528 [2024-07-15 14:05:39.218062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.528 qpair failed and we were unable to recover it. 00:26:44.528 [2024-07-15 14:05:39.218166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.528 [2024-07-15 14:05:39.218190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.528 qpair failed and we were unable to recover it. 00:26:44.528 [2024-07-15 14:05:39.218364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.528 [2024-07-15 14:05:39.218389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.528 qpair failed and we were unable to recover it. 00:26:44.528 [2024-07-15 14:05:39.218494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.528 [2024-07-15 14:05:39.218528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.528 qpair failed and we were unable to recover it. 00:26:44.528 [2024-07-15 14:05:39.218695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.528 [2024-07-15 14:05:39.218733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.528 qpair failed and we were unable to recover it. 00:26:44.528 [2024-07-15 14:05:39.218945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.528 [2024-07-15 14:05:39.218979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.528 qpair failed and we were unable to recover it. 00:26:44.528 [2024-07-15 14:05:39.219118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.528 [2024-07-15 14:05:39.219143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.528 qpair failed and we were unable to recover it. 
00:26:44.528 [2024-07-15 14:05:39.219297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.528 [2024-07-15 14:05:39.219336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.528 qpair failed and we were unable to recover it. 00:26:44.528 [2024-07-15 14:05:39.219512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.528 [2024-07-15 14:05:39.219560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.528 qpair failed and we were unable to recover it. 00:26:44.528 [2024-07-15 14:05:39.219761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.528 [2024-07-15 14:05:39.219796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.528 qpair failed and we were unable to recover it. 00:26:44.528 [2024-07-15 14:05:39.219985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.528 [2024-07-15 14:05:39.220008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.528 qpair failed and we were unable to recover it. 00:26:44.528 [2024-07-15 14:05:39.220148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.528 [2024-07-15 14:05:39.220171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.528 qpair failed and we were unable to recover it. 00:26:44.528 [2024-07-15 14:05:39.220357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.528 [2024-07-15 14:05:39.220384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.528 qpair failed and we were unable to recover it. 00:26:44.528 [2024-07-15 14:05:39.220549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.528 [2024-07-15 14:05:39.220572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.528 qpair failed and we were unable to recover it. 00:26:44.528 [2024-07-15 14:05:39.220697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.528 [2024-07-15 14:05:39.220736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.528 qpair failed and we were unable to recover it. 00:26:44.528 [2024-07-15 14:05:39.220887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.528 [2024-07-15 14:05:39.220928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.528 qpair failed and we were unable to recover it. 00:26:44.528 [2024-07-15 14:05:39.221030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.528 [2024-07-15 14:05:39.221076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.528 qpair failed and we were unable to recover it. 
00:26:44.528 [2024-07-15 14:05:39.221251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.528 [2024-07-15 14:05:39.221274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.528 qpair failed and we were unable to recover it. 00:26:44.528 [2024-07-15 14:05:39.221420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.528 [2024-07-15 14:05:39.221459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.528 qpair failed and we were unable to recover it. 00:26:44.528 [2024-07-15 14:05:39.221699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.528 [2024-07-15 14:05:39.221723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.528 qpair failed and we were unable to recover it. 00:26:44.528 [2024-07-15 14:05:39.221869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.528 [2024-07-15 14:05:39.221894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.528 qpair failed and we were unable to recover it. 00:26:44.528 [2024-07-15 14:05:39.222056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.528 [2024-07-15 14:05:39.222094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.528 qpair failed and we were unable to recover it. 00:26:44.528 [2024-07-15 14:05:39.222321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.528 [2024-07-15 14:05:39.222344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.528 qpair failed and we were unable to recover it. 00:26:44.528 [2024-07-15 14:05:39.222512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.528 [2024-07-15 14:05:39.222535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.528 qpair failed and we were unable to recover it. 00:26:44.528 [2024-07-15 14:05:39.222658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.528 [2024-07-15 14:05:39.222696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.528 qpair failed and we were unable to recover it. 00:26:44.528 [2024-07-15 14:05:39.222850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.528 [2024-07-15 14:05:39.222877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.528 qpair failed and we were unable to recover it. 00:26:44.528 [2024-07-15 14:05:39.222988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.528 [2024-07-15 14:05:39.223014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.528 qpair failed and we were unable to recover it. 
00:26:44.528 [2024-07-15 14:05:39.223125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.528 [2024-07-15 14:05:39.223150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.528 qpair failed and we were unable to recover it. 00:26:44.528 [2024-07-15 14:05:39.223282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.529 [2024-07-15 14:05:39.223306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.529 qpair failed and we were unable to recover it. 00:26:44.529 [2024-07-15 14:05:39.223424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.529 [2024-07-15 14:05:39.223448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.529 qpair failed and we were unable to recover it. 00:26:44.529 [2024-07-15 14:05:39.223569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.529 [2024-07-15 14:05:39.223593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.529 qpair failed and we were unable to recover it. 00:26:44.529 [2024-07-15 14:05:39.223837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.529 [2024-07-15 14:05:39.223879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.529 qpair failed and we were unable to recover it. 00:26:44.529 [2024-07-15 14:05:39.223989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.529 [2024-07-15 14:05:39.224014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.529 qpair failed and we were unable to recover it. 00:26:44.529 [2024-07-15 14:05:39.224204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.529 [2024-07-15 14:05:39.224228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.529 qpair failed and we were unable to recover it. 00:26:44.529 [2024-07-15 14:05:39.224404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.529 [2024-07-15 14:05:39.224428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.529 qpair failed and we were unable to recover it. 00:26:44.529 [2024-07-15 14:05:39.224612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.529 [2024-07-15 14:05:39.224636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.529 qpair failed and we were unable to recover it. 00:26:44.529 [2024-07-15 14:05:39.224812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.529 [2024-07-15 14:05:39.224836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.529 qpair failed and we were unable to recover it. 
00:26:44.532 [2024-07-15 14:05:39.263942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.532 [2024-07-15 14:05:39.263967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-07-15 14:05:39.264148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-07-15 14:05:39.264172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-07-15 14:05:39.264373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-07-15 14:05:39.264397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-07-15 14:05:39.264558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-07-15 14:05:39.264580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-07-15 14:05:39.264820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-07-15 14:05:39.264856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-07-15 14:05:39.265015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-07-15 14:05:39.265054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-07-15 14:05:39.265228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-07-15 14:05:39.265251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-07-15 14:05:39.265464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-07-15 14:05:39.265487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-07-15 14:05:39.265684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-07-15 14:05:39.265707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-07-15 14:05:39.265895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-07-15 14:05:39.265931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 
00:26:44.533 [2024-07-15 14:05:39.266117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-07-15 14:05:39.266141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-07-15 14:05:39.266327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-07-15 14:05:39.266351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-07-15 14:05:39.266509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-07-15 14:05:39.266533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-07-15 14:05:39.266765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-07-15 14:05:39.266791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-07-15 14:05:39.266941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-07-15 14:05:39.266965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-07-15 14:05:39.267107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-07-15 14:05:39.267153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-07-15 14:05:39.267375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-07-15 14:05:39.267399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-07-15 14:05:39.267555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-07-15 14:05:39.267578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-07-15 14:05:39.267819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-07-15 14:05:39.267854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-07-15 14:05:39.268002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-07-15 14:05:39.268025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 
00:26:44.533 [2024-07-15 14:05:39.268227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-07-15 14:05:39.268261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-07-15 14:05:39.268500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-07-15 14:05:39.268527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-07-15 14:05:39.268705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-07-15 14:05:39.268762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-07-15 14:05:39.268897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-07-15 14:05:39.268919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-07-15 14:05:39.269047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-07-15 14:05:39.269071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-07-15 14:05:39.269240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-07-15 14:05:39.269291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-07-15 14:05:39.269494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-07-15 14:05:39.269517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-07-15 14:05:39.269742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-07-15 14:05:39.269777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-07-15 14:05:39.269936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-07-15 14:05:39.269960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-07-15 14:05:39.270153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-07-15 14:05:39.270192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 
00:26:44.533 [2024-07-15 14:05:39.270380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-07-15 14:05:39.270404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-07-15 14:05:39.270558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-07-15 14:05:39.270581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-07-15 14:05:39.270747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-07-15 14:05:39.270772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-07-15 14:05:39.270906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-07-15 14:05:39.270932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-07-15 14:05:39.271172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-07-15 14:05:39.271195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-07-15 14:05:39.271363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-07-15 14:05:39.271387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-07-15 14:05:39.271688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-07-15 14:05:39.271712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-07-15 14:05:39.271843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-07-15 14:05:39.271876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-07-15 14:05:39.272058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-07-15 14:05:39.272097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-07-15 14:05:39.272290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-07-15 14:05:39.272313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 
00:26:44.533 [2024-07-15 14:05:39.272495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-07-15 14:05:39.272518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-07-15 14:05:39.272755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-07-15 14:05:39.272780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-07-15 14:05:39.272892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-07-15 14:05:39.272916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-07-15 14:05:39.273098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-07-15 14:05:39.273136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-07-15 14:05:39.273302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-07-15 14:05:39.273325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-07-15 14:05:39.273512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-07-15 14:05:39.273534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-07-15 14:05:39.273647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-07-15 14:05:39.273684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-07-15 14:05:39.273825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-07-15 14:05:39.273848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-07-15 14:05:39.273961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-07-15 14:05:39.273985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-07-15 14:05:39.274170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-07-15 14:05:39.274208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 
00:26:44.534 [2024-07-15 14:05:39.274356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-07-15 14:05:39.274389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-07-15 14:05:39.274579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-07-15 14:05:39.274602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-07-15 14:05:39.274762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-07-15 14:05:39.274786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-07-15 14:05:39.274981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-07-15 14:05:39.275008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-07-15 14:05:39.275242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-07-15 14:05:39.275264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-07-15 14:05:39.275437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-07-15 14:05:39.275461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-07-15 14:05:39.275612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-07-15 14:05:39.275635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-07-15 14:05:39.275841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-07-15 14:05:39.275878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-07-15 14:05:39.276040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-07-15 14:05:39.276065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-07-15 14:05:39.276278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-07-15 14:05:39.276302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 
00:26:44.534 [2024-07-15 14:05:39.276451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-07-15 14:05:39.276490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-07-15 14:05:39.276743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-07-15 14:05:39.276769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-07-15 14:05:39.276914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-07-15 14:05:39.276939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-07-15 14:05:39.277161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-07-15 14:05:39.277184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-07-15 14:05:39.277329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-07-15 14:05:39.277353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-07-15 14:05:39.277490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-07-15 14:05:39.277533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-07-15 14:05:39.277669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-07-15 14:05:39.277707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-07-15 14:05:39.277950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-07-15 14:05:39.277975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-07-15 14:05:39.278135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-07-15 14:05:39.278159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-07-15 14:05:39.278376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-07-15 14:05:39.278399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 
00:26:44.534 [2024-07-15 14:05:39.278537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-07-15 14:05:39.278560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-07-15 14:05:39.278711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-07-15 14:05:39.278757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-07-15 14:05:39.278994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-07-15 14:05:39.279036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-07-15 14:05:39.279201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-07-15 14:05:39.279224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-07-15 14:05:39.279453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-07-15 14:05:39.279477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-07-15 14:05:39.279653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-07-15 14:05:39.279676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-07-15 14:05:39.279905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-07-15 14:05:39.279931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-07-15 14:05:39.280086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-07-15 14:05:39.280109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-07-15 14:05:39.280247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-07-15 14:05:39.280270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-07-15 14:05:39.280523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-07-15 14:05:39.280547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 
00:26:44.534 [2024-07-15 14:05:39.280731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-07-15 14:05:39.280762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-07-15 14:05:39.280903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-07-15 14:05:39.280926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-07-15 14:05:39.281037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-07-15 14:05:39.281062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-07-15 14:05:39.281216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-07-15 14:05:39.281254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-07-15 14:05:39.281364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-07-15 14:05:39.281388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-07-15 14:05:39.281606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-07-15 14:05:39.281644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-07-15 14:05:39.281820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-07-15 14:05:39.281846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-07-15 14:05:39.282024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-07-15 14:05:39.282065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-07-15 14:05:39.282171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-07-15 14:05:39.282209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-07-15 14:05:39.282343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-07-15 14:05:39.282368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 
00:26:44.534 [2024-07-15 14:05:39.282548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-07-15 14:05:39.282574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-07-15 14:05:39.282707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-07-15 14:05:39.282732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-07-15 14:05:39.282839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-07-15 14:05:39.282866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-07-15 14:05:39.282978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-07-15 14:05:39.283004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-07-15 14:05:39.283151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-07-15 14:05:39.283177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-07-15 14:05:39.283315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-07-15 14:05:39.283341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-07-15 14:05:39.283496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-07-15 14:05:39.283522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-07-15 14:05:39.283626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-07-15 14:05:39.283651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-07-15 14:05:39.283802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-07-15 14:05:39.283828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-07-15 14:05:39.284064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-07-15 14:05:39.284090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 
00:26:44.535 [2024-07-15 14:05:39.284203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-07-15 14:05:39.284227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-07-15 14:05:39.284425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-07-15 14:05:39.284454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-07-15 14:05:39.284617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-07-15 14:05:39.284642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-07-15 14:05:39.284791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-07-15 14:05:39.284819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-07-15 14:05:39.284943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-07-15 14:05:39.284969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-07-15 14:05:39.285072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-07-15 14:05:39.285101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-07-15 14:05:39.285252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-07-15 14:05:39.285278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-07-15 14:05:39.285505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-07-15 14:05:39.285530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-07-15 14:05:39.285710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-07-15 14:05:39.285735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-07-15 14:05:39.285874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-07-15 14:05:39.285900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 
00:26:44.535 [2024-07-15 14:05:39.286041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-07-15 14:05:39.286082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-07-15 14:05:39.286277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-07-15 14:05:39.286313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-07-15 14:05:39.286474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-07-15 14:05:39.286499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-07-15 14:05:39.286679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-07-15 14:05:39.286708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-07-15 14:05:39.286890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-07-15 14:05:39.286915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-07-15 14:05:39.287039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-07-15 14:05:39.287065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-07-15 14:05:39.287169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-07-15 14:05:39.287194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-07-15 14:05:39.287342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-07-15 14:05:39.287368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-07-15 14:05:39.287595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-07-15 14:05:39.287627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-07-15 14:05:39.287757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-07-15 14:05:39.287783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 
00:26:44.535 [2024-07-15 14:05:39.287924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-07-15 14:05:39.287949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-07-15 14:05:39.288105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-07-15 14:05:39.288146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-07-15 14:05:39.288291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-07-15 14:05:39.288316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-07-15 14:05:39.288528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-07-15 14:05:39.288563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-07-15 14:05:39.288791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-07-15 14:05:39.288818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-07-15 14:05:39.288923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-07-15 14:05:39.288948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-07-15 14:05:39.289153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-07-15 14:05:39.289180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-07-15 14:05:39.289362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-07-15 14:05:39.289387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-07-15 14:05:39.289593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-07-15 14:05:39.289618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-07-15 14:05:39.289801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-07-15 14:05:39.289827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 
00:26:44.535 [2024-07-15 14:05:39.289956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-07-15 14:05:39.289982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-07-15 14:05:39.290117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-07-15 14:05:39.290143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-07-15 14:05:39.290289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-07-15 14:05:39.290331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-07-15 14:05:39.290571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-07-15 14:05:39.290597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-07-15 14:05:39.290747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-07-15 14:05:39.290788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-07-15 14:05:39.290896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-07-15 14:05:39.290921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-07-15 14:05:39.291046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-07-15 14:05:39.291071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-07-15 14:05:39.291255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-07-15 14:05:39.291282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-07-15 14:05:39.291409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-07-15 14:05:39.291434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-07-15 14:05:39.291559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-07-15 14:05:39.291585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 
00:26:44.535 [2024-07-15 14:05:39.291716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-07-15 14:05:39.291749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.536 qpair failed and we were unable to recover it. 00:26:44.536 [2024-07-15 14:05:39.291891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.536 [2024-07-15 14:05:39.291917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.536 qpair failed and we were unable to recover it. 00:26:44.536 [2024-07-15 14:05:39.292080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.536 [2024-07-15 14:05:39.292105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.536 qpair failed and we were unable to recover it. 00:26:44.536 [2024-07-15 14:05:39.292280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.536 [2024-07-15 14:05:39.292307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.536 qpair failed and we were unable to recover it. 00:26:44.536 [2024-07-15 14:05:39.292435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.536 [2024-07-15 14:05:39.292460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.536 qpair failed and we were unable to recover it. 00:26:44.536 [2024-07-15 14:05:39.292583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.536 [2024-07-15 14:05:39.292613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.536 qpair failed and we were unable to recover it. 00:26:44.536 [2024-07-15 14:05:39.292750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.536 [2024-07-15 14:05:39.292777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.536 qpair failed and we were unable to recover it. 00:26:44.536 [2024-07-15 14:05:39.292949] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:44.536 [2024-07-15 14:05:39.292973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.536 [2024-07-15 14:05:39.292984] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:44.536 [2024-07-15 14:05:39.292998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.536 [2024-07-15 14:05:39.293000] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:44.536 qpair failed and we were unable to recover it. 00:26:44.536 [2024-07-15 14:05:39.293016] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:44.536 [2024-07-15 14:05:39.293028] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:44.536 [2024-07-15 14:05:39.293153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.536 [2024-07-15 14:05:39.293112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:26:44.536 [2024-07-15 14:05:39.293177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.536 qpair failed and we were unable to recover it. 00:26:44.536 [2024-07-15 14:05:39.293168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:26:44.536 [2024-07-15 14:05:39.293219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:26:44.536 [2024-07-15 14:05:39.293222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:26:44.536 [2024-07-15 14:05:39.293416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.536 [2024-07-15 14:05:39.293470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.536 qpair failed and we were unable to recover it. 00:26:44.536 [2024-07-15 14:05:39.293575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.536 [2024-07-15 14:05:39.293600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.536 qpair failed and we were unable to recover it. 00:26:44.536 [2024-07-15 14:05:39.293720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.536 [2024-07-15 14:05:39.293750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.536 qpair failed and we were unable to recover it. 00:26:44.536 [2024-07-15 14:05:39.293850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.536 [2024-07-15 14:05:39.293876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.536 qpair failed and we were unable to recover it. 00:26:44.536 [2024-07-15 14:05:39.294033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.536 [2024-07-15 14:05:39.294060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.536 qpair failed and we were unable to recover it. 00:26:44.536 [2024-07-15 14:05:39.294217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.536 [2024-07-15 14:05:39.294243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.536 qpair failed and we were unable to recover it. 00:26:44.536 [2024-07-15 14:05:39.294399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.536 [2024-07-15 14:05:39.294425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.536 qpair failed and we were unable to recover it. 00:26:44.536 [2024-07-15 14:05:39.294627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.536 [2024-07-15 14:05:39.294653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.536 qpair failed and we were unable to recover it. 
00:26:44.536 [2024-07-15 14:05:39.294808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.536 [2024-07-15 14:05:39.294836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.536 qpair failed and we were unable to recover it. 00:26:44.536 [2024-07-15 14:05:39.295065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.536 [2024-07-15 14:05:39.295102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.536 qpair failed and we were unable to recover it. 00:26:44.536 [2024-07-15 14:05:39.295243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.536 [2024-07-15 14:05:39.295269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.536 qpair failed and we were unable to recover it. 00:26:44.536 [2024-07-15 14:05:39.295486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.536 [2024-07-15 14:05:39.295523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.536 qpair failed and we were unable to recover it. 00:26:44.536 [2024-07-15 14:05:39.295685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.536 [2024-07-15 14:05:39.295719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.536 qpair failed and we were unable to recover it. 00:26:44.536 [2024-07-15 14:05:39.295865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.536 [2024-07-15 14:05:39.295892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.536 qpair failed and we were unable to recover it. 00:26:44.536 [2024-07-15 14:05:39.296085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.536 [2024-07-15 14:05:39.296112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.536 qpair failed and we were unable to recover it. 00:26:44.536 [2024-07-15 14:05:39.296280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.536 [2024-07-15 14:05:39.296306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.536 qpair failed and we were unable to recover it. 00:26:44.536 [2024-07-15 14:05:39.296435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.536 [2024-07-15 14:05:39.296461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.536 qpair failed and we were unable to recover it. 00:26:44.536 [2024-07-15 14:05:39.296617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.536 [2024-07-15 14:05:39.296643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.536 qpair failed and we were unable to recover it. 
00:26:44.536 [2024-07-15 14:05:39.296863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.536 [2024-07-15 14:05:39.296901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.536 qpair failed and we were unable to recover it. 00:26:44.536 [2024-07-15 14:05:39.297061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.536 [2024-07-15 14:05:39.297086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.536 qpair failed and we were unable to recover it. 00:26:44.536 [2024-07-15 14:05:39.297192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.536 [2024-07-15 14:05:39.297219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.536 qpair failed and we were unable to recover it. 00:26:44.536 [2024-07-15 14:05:39.297401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.536 [2024-07-15 14:05:39.297428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.536 qpair failed and we were unable to recover it. 00:26:44.536 [2024-07-15 14:05:39.297556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.536 [2024-07-15 14:05:39.297582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.536 qpair failed and we were unable to recover it. 00:26:44.536 [2024-07-15 14:05:39.297729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.536 [2024-07-15 14:05:39.297772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.536 qpair failed and we were unable to recover it. 00:26:44.536 [2024-07-15 14:05:39.297960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.536 [2024-07-15 14:05:39.297987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.536 qpair failed and we were unable to recover it. 00:26:44.536 [2024-07-15 14:05:39.298165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.536 [2024-07-15 14:05:39.298192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.536 qpair failed and we were unable to recover it. 00:26:44.536 [2024-07-15 14:05:39.298343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.536 [2024-07-15 14:05:39.298370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.536 qpair failed and we were unable to recover it. 00:26:44.536 [2024-07-15 14:05:39.298555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.536 [2024-07-15 14:05:39.298592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.536 qpair failed and we were unable to recover it. 
00:26:44.536 [2024-07-15 14:05:39.298817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.536 [2024-07-15 14:05:39.298855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.536 qpair failed and we were unable to recover it. 00:26:44.536 [2024-07-15 14:05:39.298992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.536 [2024-07-15 14:05:39.299019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.536 qpair failed and we were unable to recover it. 00:26:44.536 [2024-07-15 14:05:39.299172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.536 [2024-07-15 14:05:39.299199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.536 qpair failed and we were unable to recover it. 00:26:44.536 [2024-07-15 14:05:39.299357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.536 [2024-07-15 14:05:39.299384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.536 qpair failed and we were unable to recover it. 00:26:44.536 [2024-07-15 14:05:39.299516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.536 [2024-07-15 14:05:39.299542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.537 qpair failed and we were unable to recover it. 00:26:44.537 [2024-07-15 14:05:39.299675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.537 [2024-07-15 14:05:39.299705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.537 qpair failed and we were unable to recover it. 00:26:44.537 [2024-07-15 14:05:39.299908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.537 [2024-07-15 14:05:39.299936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.537 qpair failed and we were unable to recover it. 00:26:44.537 [2024-07-15 14:05:39.300091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.537 [2024-07-15 14:05:39.300118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.537 qpair failed and we were unable to recover it. 00:26:44.537 [2024-07-15 14:05:39.300331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.537 [2024-07-15 14:05:39.300358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.537 qpair failed and we were unable to recover it. 00:26:44.537 [2024-07-15 14:05:39.300589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.537 [2024-07-15 14:05:39.300615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.537 qpair failed and we were unable to recover it. 
00:26:44.537 [2024-07-15 14:05:39.300768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.537 [2024-07-15 14:05:39.300794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.537 qpair failed and we were unable to recover it. 00:26:44.537 [2024-07-15 14:05:39.300964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.537 [2024-07-15 14:05:39.300990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.537 qpair failed and we were unable to recover it. 00:26:44.537 [2024-07-15 14:05:39.301116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.537 [2024-07-15 14:05:39.301142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.537 qpair failed and we were unable to recover it. 00:26:44.537 [2024-07-15 14:05:39.301327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.537 [2024-07-15 14:05:39.301353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.537 qpair failed and we were unable to recover it. 00:26:44.537 [2024-07-15 14:05:39.301550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.537 [2024-07-15 14:05:39.301577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.537 qpair failed and we were unable to recover it. 00:26:44.537 [2024-07-15 14:05:39.301682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.537 [2024-07-15 14:05:39.301708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.537 qpair failed and we were unable to recover it. 00:26:44.537 [2024-07-15 14:05:39.301827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.537 [2024-07-15 14:05:39.301853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.537 qpair failed and we were unable to recover it. 00:26:44.537 [2024-07-15 14:05:39.301990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.537 [2024-07-15 14:05:39.302016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.537 qpair failed and we were unable to recover it. 00:26:44.537 [2024-07-15 14:05:39.302137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.537 [2024-07-15 14:05:39.302163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.537 qpair failed and we were unable to recover it. 00:26:44.537 [2024-07-15 14:05:39.302316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.537 [2024-07-15 14:05:39.302343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.537 qpair failed and we were unable to recover it. 
00:26:44.537 [2024-07-15 14:05:39.302475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.537 [2024-07-15 14:05:39.302501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.537 qpair failed and we were unable to recover it. 00:26:44.537 [2024-07-15 14:05:39.302625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.537 [2024-07-15 14:05:39.302651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.537 qpair failed and we were unable to recover it. 00:26:44.537 [2024-07-15 14:05:39.302805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.537 [2024-07-15 14:05:39.302832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.537 qpair failed and we were unable to recover it. 00:26:44.537 [2024-07-15 14:05:39.302938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.537 [2024-07-15 14:05:39.302964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.537 qpair failed and we were unable to recover it. 00:26:44.537 [2024-07-15 14:05:39.303095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.537 [2024-07-15 14:05:39.303121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.537 qpair failed and we were unable to recover it. 00:26:44.537 [2024-07-15 14:05:39.303276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.537 [2024-07-15 14:05:39.303302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.537 qpair failed and we were unable to recover it. 00:26:44.537 [2024-07-15 14:05:39.303456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.537 [2024-07-15 14:05:39.303482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.537 qpair failed and we were unable to recover it. 00:26:44.537 [2024-07-15 14:05:39.303668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.537 [2024-07-15 14:05:39.303694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.537 qpair failed and we were unable to recover it. 00:26:44.537 [2024-07-15 14:05:39.303831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.537 [2024-07-15 14:05:39.303858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.537 qpair failed and we were unable to recover it. 00:26:44.537 [2024-07-15 14:05:39.303967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.537 [2024-07-15 14:05:39.303993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.537 qpair failed and we were unable to recover it. 
00:26:44.537 [2024-07-15 14:05:39.304117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.538 [2024-07-15 14:05:39.304144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.538 qpair failed and we were unable to recover it. 00:26:44.538 [2024-07-15 14:05:39.304313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.538 [2024-07-15 14:05:39.304339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.538 qpair failed and we were unable to recover it. 00:26:44.538 [2024-07-15 14:05:39.304472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.538 [2024-07-15 14:05:39.304498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.538 qpair failed and we were unable to recover it. 00:26:44.538 [2024-07-15 14:05:39.304724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.538 [2024-07-15 14:05:39.304756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.538 qpair failed and we were unable to recover it. 00:26:44.538 [2024-07-15 14:05:39.304912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.538 [2024-07-15 14:05:39.304938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.538 qpair failed and we were unable to recover it. 00:26:44.538 [2024-07-15 14:05:39.305044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.538 [2024-07-15 14:05:39.305078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.538 qpair failed and we were unable to recover it. 00:26:44.538 [2024-07-15 14:05:39.305291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.538 [2024-07-15 14:05:39.305317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.538 qpair failed and we were unable to recover it. 00:26:44.538 [2024-07-15 14:05:39.305450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.538 [2024-07-15 14:05:39.305476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.538 qpair failed and we were unable to recover it. 00:26:44.538 [2024-07-15 14:05:39.305607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.538 [2024-07-15 14:05:39.305633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.538 qpair failed and we were unable to recover it. 00:26:44.538 [2024-07-15 14:05:39.305754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.538 [2024-07-15 14:05:39.305781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.538 qpair failed and we were unable to recover it. 
00:26:44.538 [2024-07-15 14:05:39.305886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.538 [2024-07-15 14:05:39.305912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.538 qpair failed and we were unable to recover it. 00:26:44.538 [2024-07-15 14:05:39.306057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.538 [2024-07-15 14:05:39.306083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.538 qpair failed and we were unable to recover it. 00:26:44.538 [2024-07-15 14:05:39.306185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.538 [2024-07-15 14:05:39.306211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.538 qpair failed and we were unable to recover it. 00:26:44.538 [2024-07-15 14:05:39.306383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.538 [2024-07-15 14:05:39.306408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.538 qpair failed and we were unable to recover it. 00:26:44.538 [2024-07-15 14:05:39.306537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.538 [2024-07-15 14:05:39.306563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.538 qpair failed and we were unable to recover it. 00:26:44.538 [2024-07-15 14:05:39.306690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.538 [2024-07-15 14:05:39.306720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.538 qpair failed and we were unable to recover it. 00:26:44.538 [2024-07-15 14:05:39.306835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.538 [2024-07-15 14:05:39.306861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.538 qpair failed and we were unable to recover it. 00:26:44.538 [2024-07-15 14:05:39.306988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.538 [2024-07-15 14:05:39.307014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.538 qpair failed and we were unable to recover it. 00:26:44.538 [2024-07-15 14:05:39.307112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.538 [2024-07-15 14:05:39.307145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.538 qpair failed and we were unable to recover it. 00:26:44.538 [2024-07-15 14:05:39.307311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.538 [2024-07-15 14:05:39.307337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.538 qpair failed and we were unable to recover it. 
00:26:44.538 [2024-07-15 14:05:39.307559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.538 [2024-07-15 14:05:39.307585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.538 qpair failed and we were unable to recover it. 00:26:44.538 [2024-07-15 14:05:39.307700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.538 [2024-07-15 14:05:39.307726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.538 qpair failed and we were unable to recover it. 00:26:44.538 [2024-07-15 14:05:39.307863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.538 [2024-07-15 14:05:39.307889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.538 qpair failed and we were unable to recover it. 00:26:44.538 [2024-07-15 14:05:39.308060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.538 [2024-07-15 14:05:39.308085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.538 qpair failed and we were unable to recover it. 00:26:44.538 [2024-07-15 14:05:39.308220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.538 [2024-07-15 14:05:39.308246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.538 qpair failed and we were unable to recover it. 00:26:44.538 [2024-07-15 14:05:39.308373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.538 [2024-07-15 14:05:39.308398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.538 qpair failed and we were unable to recover it. 00:26:44.538 [2024-07-15 14:05:39.308553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.538 [2024-07-15 14:05:39.308579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.538 qpair failed and we were unable to recover it. 00:26:44.538 [2024-07-15 14:05:39.308733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.538 [2024-07-15 14:05:39.308767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.538 qpair failed and we were unable to recover it. 00:26:44.538 [2024-07-15 14:05:39.308897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.538 [2024-07-15 14:05:39.308923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.538 qpair failed and we were unable to recover it. 00:26:44.538 [2024-07-15 14:05:39.309035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.538 [2024-07-15 14:05:39.309061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.538 qpair failed and we were unable to recover it. 
00:26:44.538 [2024-07-15 14:05:39.309237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.538 [2024-07-15 14:05:39.309274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.538 qpair failed and we were unable to recover it. 00:26:44.539 [2024-07-15 14:05:39.309408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.539 [2024-07-15 14:05:39.309434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.539 qpair failed and we were unable to recover it. 00:26:44.539 [2024-07-15 14:05:39.309587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.539 [2024-07-15 14:05:39.309612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.539 qpair failed and we were unable to recover it. 00:26:44.539 [2024-07-15 14:05:39.309801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.539 [2024-07-15 14:05:39.309839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.539 qpair failed and we were unable to recover it. 00:26:44.539 [2024-07-15 14:05:39.309980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.539 [2024-07-15 14:05:39.310006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.539 qpair failed and we were unable to recover it. 00:26:44.539 [2024-07-15 14:05:39.310120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.539 [2024-07-15 14:05:39.310145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.539 qpair failed and we were unable to recover it. 00:26:44.539 [2024-07-15 14:05:39.310325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.539 [2024-07-15 14:05:39.310351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.539 qpair failed and we were unable to recover it. 00:26:44.539 [2024-07-15 14:05:39.310474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.539 [2024-07-15 14:05:39.310499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.539 qpair failed and we were unable to recover it. 00:26:44.539 [2024-07-15 14:05:39.310602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.539 [2024-07-15 14:05:39.310628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.539 qpair failed and we were unable to recover it. 00:26:44.539 [2024-07-15 14:05:39.310760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.539 [2024-07-15 14:05:39.310787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.539 qpair failed and we were unable to recover it. 
00:26:44.539 [2024-07-15 14:05:39.310953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.539 [2024-07-15 14:05:39.310979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.539 qpair failed and we were unable to recover it. 00:26:44.539 [2024-07-15 14:05:39.311083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.539 [2024-07-15 14:05:39.311109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.539 qpair failed and we were unable to recover it. 00:26:44.539 [2024-07-15 14:05:39.311216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.539 [2024-07-15 14:05:39.311253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.539 qpair failed and we were unable to recover it. 00:26:44.539 [2024-07-15 14:05:39.311410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.539 [2024-07-15 14:05:39.311436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.539 qpair failed and we were unable to recover it. 00:26:44.539 [2024-07-15 14:05:39.311540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.539 [2024-07-15 14:05:39.311566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.539 qpair failed and we were unable to recover it. 00:26:44.539 [2024-07-15 14:05:39.311701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.539 [2024-07-15 14:05:39.311726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.539 qpair failed and we were unable to recover it. 00:26:44.539 [2024-07-15 14:05:39.311827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.539 [2024-07-15 14:05:39.311853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.539 qpair failed and we were unable to recover it. 00:26:44.539 [2024-07-15 14:05:39.311980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.539 [2024-07-15 14:05:39.312006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.539 qpair failed and we were unable to recover it. 00:26:44.539 [2024-07-15 14:05:39.312133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.539 [2024-07-15 14:05:39.312158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.539 qpair failed and we were unable to recover it. 00:26:44.539 [2024-07-15 14:05:39.312297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.539 [2024-07-15 14:05:39.312324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.539 qpair failed and we were unable to recover it. 
00:26:44.539 [2024-07-15 14:05:39.312453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.539 [2024-07-15 14:05:39.312478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.539 qpair failed and we were unable to recover it. 00:26:44.539 [2024-07-15 14:05:39.312629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.539 [2024-07-15 14:05:39.312656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.539 qpair failed and we were unable to recover it. 00:26:44.539 [2024-07-15 14:05:39.312783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.539 [2024-07-15 14:05:39.312809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.539 qpair failed and we were unable to recover it. 00:26:44.539 [2024-07-15 14:05:39.312988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.539 [2024-07-15 14:05:39.313014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.539 qpair failed and we were unable to recover it. 00:26:44.539 [2024-07-15 14:05:39.313145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.539 [2024-07-15 14:05:39.313171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.539 qpair failed and we were unable to recover it. 00:26:44.539 [2024-07-15 14:05:39.313323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.539 [2024-07-15 14:05:39.313353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.539 qpair failed and we were unable to recover it. 00:26:44.539 [2024-07-15 14:05:39.313457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.539 [2024-07-15 14:05:39.313482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.539 qpair failed and we were unable to recover it. 00:26:44.539 [2024-07-15 14:05:39.313607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.539 [2024-07-15 14:05:39.313634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.539 qpair failed and we were unable to recover it. 00:26:44.539 [2024-07-15 14:05:39.313770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.539 [2024-07-15 14:05:39.313796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.539 qpair failed and we were unable to recover it. 00:26:44.539 [2024-07-15 14:05:39.313920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.539 [2024-07-15 14:05:39.313946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.539 qpair failed and we were unable to recover it. 
00:26:44.539 [2024-07-15 14:05:39.314076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.539 [2024-07-15 14:05:39.314101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.539 qpair failed and we were unable to recover it. 00:26:44.539 [2024-07-15 14:05:39.314221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.540 [2024-07-15 14:05:39.314246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.540 qpair failed and we were unable to recover it. 00:26:44.540 [2024-07-15 14:05:39.314398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.540 [2024-07-15 14:05:39.314425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.540 qpair failed and we were unable to recover it. 00:26:44.540 [2024-07-15 14:05:39.314663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.540 [2024-07-15 14:05:39.314689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.540 qpair failed and we were unable to recover it. 00:26:44.540 [2024-07-15 14:05:39.314848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.540 [2024-07-15 14:05:39.314873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.540 qpair failed and we were unable to recover it. 00:26:44.540 [2024-07-15 14:05:39.314993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.540 [2024-07-15 14:05:39.315018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.540 qpair failed and we were unable to recover it. 00:26:44.540 [2024-07-15 14:05:39.315134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.540 [2024-07-15 14:05:39.315160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.540 qpair failed and we were unable to recover it. 00:26:44.540 [2024-07-15 14:05:39.315285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.540 [2024-07-15 14:05:39.315310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.540 qpair failed and we were unable to recover it. 00:26:44.540 [2024-07-15 14:05:39.315429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.540 [2024-07-15 14:05:39.315454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.540 qpair failed and we were unable to recover it. 00:26:44.540 [2024-07-15 14:05:39.315559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.540 [2024-07-15 14:05:39.315585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.540 qpair failed and we were unable to recover it. 
00:26:44.540 [2024-07-15 14:05:39.315734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.540 [2024-07-15 14:05:39.315765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.540 qpair failed and we were unable to recover it. 00:26:44.540 [2024-07-15 14:05:39.315927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.540 [2024-07-15 14:05:39.315954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.540 qpair failed and we were unable to recover it. 00:26:44.540 [2024-07-15 14:05:39.316063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.540 [2024-07-15 14:05:39.316098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.540 qpair failed and we were unable to recover it. 00:26:44.540 [2024-07-15 14:05:39.316225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.540 [2024-07-15 14:05:39.316250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.540 qpair failed and we were unable to recover it. 00:26:44.540 [2024-07-15 14:05:39.316380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.540 [2024-07-15 14:05:39.316406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.540 qpair failed and we were unable to recover it. 00:26:44.540 [2024-07-15 14:05:39.316565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.540 [2024-07-15 14:05:39.316590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.540 qpair failed and we were unable to recover it. 00:26:44.540 [2024-07-15 14:05:39.316689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.540 [2024-07-15 14:05:39.316716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.540 qpair failed and we were unable to recover it. 00:26:44.540 [2024-07-15 14:05:39.316955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.540 [2024-07-15 14:05:39.317000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.540 qpair failed and we were unable to recover it. 00:26:44.540 [2024-07-15 14:05:39.317139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.540 [2024-07-15 14:05:39.317167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.540 qpair failed and we were unable to recover it. 00:26:44.540 [2024-07-15 14:05:39.317358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.540 [2024-07-15 14:05:39.317385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.540 qpair failed and we were unable to recover it. 
00:26:44.540 [2024-07-15 14:05:39.317484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.540 [2024-07-15 14:05:39.317511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.540 qpair failed and we were unable to recover it. 00:26:44.540 [2024-07-15 14:05:39.317637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.540 [2024-07-15 14:05:39.317663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.540 qpair failed and we were unable to recover it. 00:26:44.540 [2024-07-15 14:05:39.317873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.540 [2024-07-15 14:05:39.317911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.540 qpair failed and we were unable to recover it. 00:26:44.540 [2024-07-15 14:05:39.318076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.540 [2024-07-15 14:05:39.318102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.540 qpair failed and we were unable to recover it. 00:26:44.540 [2024-07-15 14:05:39.318222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.540 [2024-07-15 14:05:39.318248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.540 qpair failed and we were unable to recover it. 00:26:44.540 [2024-07-15 14:05:39.318382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.540 [2024-07-15 14:05:39.318408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.540 qpair failed and we were unable to recover it. 00:26:44.540 [2024-07-15 14:05:39.318509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.540 [2024-07-15 14:05:39.318535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.540 qpair failed and we were unable to recover it. 00:26:44.540 [2024-07-15 14:05:39.318735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.540 [2024-07-15 14:05:39.318777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.540 qpair failed and we were unable to recover it. 00:26:44.540 [2024-07-15 14:05:39.318909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.540 [2024-07-15 14:05:39.318935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.540 qpair failed and we were unable to recover it. 00:26:44.540 [2024-07-15 14:05:39.319051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.540 [2024-07-15 14:05:39.319076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.540 qpair failed and we were unable to recover it. 
00:26:44.540 [2024-07-15 14:05:39.319181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.540 [2024-07-15 14:05:39.319207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.540 qpair failed and we were unable to recover it. 00:26:44.540 [2024-07-15 14:05:39.319351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.541 [2024-07-15 14:05:39.319377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.541 qpair failed and we were unable to recover it. 00:26:44.541 [2024-07-15 14:05:39.319504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.541 [2024-07-15 14:05:39.319530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.541 qpair failed and we were unable to recover it. 00:26:44.541 [2024-07-15 14:05:39.319663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.541 [2024-07-15 14:05:39.319689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.541 qpair failed and we were unable to recover it. 00:26:44.541 [2024-07-15 14:05:39.319894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.541 [2024-07-15 14:05:39.319926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.541 qpair failed and we were unable to recover it. 00:26:44.541 [2024-07-15 14:05:39.320066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.541 [2024-07-15 14:05:39.320096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.541 qpair failed and we were unable to recover it. 00:26:44.541 [2024-07-15 14:05:39.320284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.541 [2024-07-15 14:05:39.320310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.541 qpair failed and we were unable to recover it. 00:26:44.541 [2024-07-15 14:05:39.320472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.541 [2024-07-15 14:05:39.320498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.541 qpair failed and we were unable to recover it. 00:26:44.541 [2024-07-15 14:05:39.320628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.541 [2024-07-15 14:05:39.320654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.541 qpair failed and we were unable to recover it. 00:26:44.541 [2024-07-15 14:05:39.320836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.541 [2024-07-15 14:05:39.320863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.541 qpair failed and we were unable to recover it. 
00:26:44.541 [2024-07-15 14:05:39.321071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.541 [2024-07-15 14:05:39.321097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.541 qpair failed and we were unable to recover it. 00:26:44.541 [2024-07-15 14:05:39.321216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.541 [2024-07-15 14:05:39.321242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.541 qpair failed and we were unable to recover it. 00:26:44.541 [2024-07-15 14:05:39.321383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.541 [2024-07-15 14:05:39.321409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.541 qpair failed and we were unable to recover it. 00:26:44.541 [2024-07-15 14:05:39.321566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.541 [2024-07-15 14:05:39.321592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.541 qpair failed and we were unable to recover it. 00:26:44.541 [2024-07-15 14:05:39.321760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.541 [2024-07-15 14:05:39.321788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.541 qpair failed and we were unable to recover it. 00:26:44.541 [2024-07-15 14:05:39.321917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.541 [2024-07-15 14:05:39.321943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.541 qpair failed and we were unable to recover it. 00:26:44.541 [2024-07-15 14:05:39.322039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.541 [2024-07-15 14:05:39.322065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.541 qpair failed and we were unable to recover it. 00:26:44.541 [2024-07-15 14:05:39.322191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.541 [2024-07-15 14:05:39.322218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.541 qpair failed and we were unable to recover it. 00:26:44.541 [2024-07-15 14:05:39.322351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.541 [2024-07-15 14:05:39.322378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.541 qpair failed and we were unable to recover it. 00:26:44.541 [2024-07-15 14:05:39.322510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.541 [2024-07-15 14:05:39.322536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.541 qpair failed and we were unable to recover it. 
00:26:44.541 [2024-07-15 14:05:39.322664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.541 [2024-07-15 14:05:39.322690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.541 qpair failed and we were unable to recover it. 00:26:44.541 [2024-07-15 14:05:39.322905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.541 [2024-07-15 14:05:39.322933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.541 qpair failed and we were unable to recover it. 00:26:44.541 [2024-07-15 14:05:39.323078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.541 [2024-07-15 14:05:39.323104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.541 qpair failed and we were unable to recover it. 00:26:44.541 [2024-07-15 14:05:39.323233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.541 [2024-07-15 14:05:39.323260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.541 qpair failed and we were unable to recover it. 00:26:44.541 [2024-07-15 14:05:39.323440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.541 [2024-07-15 14:05:39.323466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.541 qpair failed and we were unable to recover it. 00:26:44.541 [2024-07-15 14:05:39.323566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.541 [2024-07-15 14:05:39.323593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.541 qpair failed and we were unable to recover it. 00:26:44.541 [2024-07-15 14:05:39.323824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.541 [2024-07-15 14:05:39.323851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.541 qpair failed and we were unable to recover it. 00:26:44.541 [2024-07-15 14:05:39.324008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.541 [2024-07-15 14:05:39.324034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.541 qpair failed and we were unable to recover it. 00:26:44.541 [2024-07-15 14:05:39.324177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.541 [2024-07-15 14:05:39.324204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.541 qpair failed and we were unable to recover it. 00:26:44.541 [2024-07-15 14:05:39.324379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.541 [2024-07-15 14:05:39.324405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.541 qpair failed and we were unable to recover it. 
00:26:44.541 [2024-07-15 14:05:39.324562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.541 [2024-07-15 14:05:39.324588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.541 qpair failed and we were unable to recover it. 00:26:44.541 [2024-07-15 14:05:39.324716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.541 [2024-07-15 14:05:39.324748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.541 qpair failed and we were unable to recover it. 00:26:44.542 [2024-07-15 14:05:39.324894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.542 [2024-07-15 14:05:39.324921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.542 qpair failed and we were unable to recover it. 00:26:44.542 [2024-07-15 14:05:39.325083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.542 [2024-07-15 14:05:39.325109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.542 qpair failed and we were unable to recover it. 00:26:44.542 [2024-07-15 14:05:39.325235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.542 [2024-07-15 14:05:39.325262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.542 qpair failed and we were unable to recover it. 00:26:44.542 [2024-07-15 14:05:39.325384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.542 [2024-07-15 14:05:39.325410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.542 qpair failed and we were unable to recover it. 00:26:44.542 [2024-07-15 14:05:39.325512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.542 [2024-07-15 14:05:39.325537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.542 qpair failed and we were unable to recover it. 00:26:44.542 [2024-07-15 14:05:39.325698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.542 [2024-07-15 14:05:39.325723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.542 qpair failed and we were unable to recover it. 00:26:44.542 [2024-07-15 14:05:39.325842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.542 [2024-07-15 14:05:39.325868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.542 qpair failed and we were unable to recover it. 00:26:44.542 [2024-07-15 14:05:39.325982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.542 [2024-07-15 14:05:39.326008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.542 qpair failed and we were unable to recover it. 
00:26:44.820 [2024-07-15 14:05:39.326136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.820 [2024-07-15 14:05:39.326163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.820 qpair failed and we were unable to recover it. 00:26:44.820 [2024-07-15 14:05:39.326293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.820 [2024-07-15 14:05:39.326318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.820 qpair failed and we were unable to recover it. 00:26:44.820 [2024-07-15 14:05:39.326462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.820 [2024-07-15 14:05:39.326488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.820 qpair failed and we were unable to recover it. 00:26:44.820 [2024-07-15 14:05:39.326590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.820 [2024-07-15 14:05:39.326616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.820 qpair failed and we were unable to recover it. 00:26:44.820 [2024-07-15 14:05:39.326724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.820 [2024-07-15 14:05:39.326757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.820 qpair failed and we were unable to recover it. 00:26:44.820 [2024-07-15 14:05:39.326862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.820 [2024-07-15 14:05:39.326894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.820 qpair failed and we were unable to recover it. 00:26:44.820 [2024-07-15 14:05:39.326995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.820 [2024-07-15 14:05:39.327021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.820 qpair failed and we were unable to recover it. 00:26:44.820 [2024-07-15 14:05:39.327148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.820 [2024-07-15 14:05:39.327174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.821 qpair failed and we were unable to recover it. 00:26:44.821 [2024-07-15 14:05:39.327271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.821 [2024-07-15 14:05:39.327296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.821 qpair failed and we were unable to recover it. 00:26:44.821 [2024-07-15 14:05:39.327386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.821 [2024-07-15 14:05:39.327412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.821 qpair failed and we were unable to recover it. 
00:26:44.821 [2024-07-15 14:05:39.327541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.821 [2024-07-15 14:05:39.327566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.821 qpair failed and we were unable to recover it. 00:26:44.821 [2024-07-15 14:05:39.327696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.821 [2024-07-15 14:05:39.327723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.821 qpair failed and we were unable to recover it. 00:26:44.821 [2024-07-15 14:05:39.327846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.821 [2024-07-15 14:05:39.327872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.821 qpair failed and we were unable to recover it. 00:26:44.821 [2024-07-15 14:05:39.327997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.821 [2024-07-15 14:05:39.328022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.821 qpair failed and we were unable to recover it. 00:26:44.821 [2024-07-15 14:05:39.328160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.821 [2024-07-15 14:05:39.328186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.821 qpair failed and we were unable to recover it. 00:26:44.821 [2024-07-15 14:05:39.328310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.821 [2024-07-15 14:05:39.328337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.821 qpair failed and we were unable to recover it. 00:26:44.821 [2024-07-15 14:05:39.328471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.821 [2024-07-15 14:05:39.328498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.821 qpair failed and we were unable to recover it. 00:26:44.821 [2024-07-15 14:05:39.328602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.821 [2024-07-15 14:05:39.328628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.821 qpair failed and we were unable to recover it. 00:26:44.821 [2024-07-15 14:05:39.328750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.821 [2024-07-15 14:05:39.328777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.821 qpair failed and we were unable to recover it. 00:26:44.821 [2024-07-15 14:05:39.328912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.821 [2024-07-15 14:05:39.328938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.821 qpair failed and we were unable to recover it. 
00:26:44.821 [2024-07-15 14:05:39.329064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.821 [2024-07-15 14:05:39.329089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.821 qpair failed and we were unable to recover it. 00:26:44.821 [2024-07-15 14:05:39.329188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.821 [2024-07-15 14:05:39.329213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.821 qpair failed and we were unable to recover it. 00:26:44.821 [2024-07-15 14:05:39.329318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.821 [2024-07-15 14:05:39.329344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.821 qpair failed and we were unable to recover it. 00:26:44.821 [2024-07-15 14:05:39.329474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.821 [2024-07-15 14:05:39.329500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.821 qpair failed and we were unable to recover it. 00:26:44.821 [2024-07-15 14:05:39.329627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.821 [2024-07-15 14:05:39.329652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.821 qpair failed and we were unable to recover it. 00:26:44.821 [2024-07-15 14:05:39.329765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.821 [2024-07-15 14:05:39.329792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.821 qpair failed and we were unable to recover it. 00:26:44.821 [2024-07-15 14:05:39.329948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.821 [2024-07-15 14:05:39.329974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.821 qpair failed and we were unable to recover it. 00:26:44.821 [2024-07-15 14:05:39.330079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.821 [2024-07-15 14:05:39.330115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.821 qpair failed and we were unable to recover it. 00:26:44.821 [2024-07-15 14:05:39.330270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.821 [2024-07-15 14:05:39.330296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.821 qpair failed and we were unable to recover it. 00:26:44.821 [2024-07-15 14:05:39.330428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.821 [2024-07-15 14:05:39.330453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.821 qpair failed and we were unable to recover it. 
00:26:44.821 [2024-07-15 14:05:39.330581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.821 [2024-07-15 14:05:39.330607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.821 qpair failed and we were unable to recover it. 00:26:44.821 [2024-07-15 14:05:39.330729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.821 [2024-07-15 14:05:39.330761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.821 qpair failed and we were unable to recover it. 00:26:44.821 [2024-07-15 14:05:39.330874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.821 [2024-07-15 14:05:39.330900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.821 qpair failed and we were unable to recover it. 00:26:44.821 [2024-07-15 14:05:39.331039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.821 [2024-07-15 14:05:39.331065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.821 qpair failed and we were unable to recover it. 00:26:44.821 [2024-07-15 14:05:39.331192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.821 [2024-07-15 14:05:39.331218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.821 qpair failed and we were unable to recover it. 00:26:44.821 [2024-07-15 14:05:39.331357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.821 [2024-07-15 14:05:39.331393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.821 qpair failed and we were unable to recover it. 00:26:44.821 [2024-07-15 14:05:39.331518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.821 [2024-07-15 14:05:39.331545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.821 qpair failed and we were unable to recover it. 00:26:44.821 [2024-07-15 14:05:39.331700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.821 [2024-07-15 14:05:39.331726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.821 qpair failed and we were unable to recover it. 00:26:44.821 [2024-07-15 14:05:39.331923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.821 [2024-07-15 14:05:39.331950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.821 qpair failed and we were unable to recover it. 00:26:44.821 [2024-07-15 14:05:39.332058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.821 [2024-07-15 14:05:39.332085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.821 qpair failed and we were unable to recover it. 
00:26:44.821 [2024-07-15 14:05:39.332215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.821 [2024-07-15 14:05:39.332241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.821 qpair failed and we were unable to recover it. 00:26:44.821 [2024-07-15 14:05:39.332395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.821 [2024-07-15 14:05:39.332422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.821 qpair failed and we were unable to recover it. 00:26:44.821 [2024-07-15 14:05:39.332553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.821 [2024-07-15 14:05:39.332579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.821 qpair failed and we were unable to recover it. 00:26:44.821 [2024-07-15 14:05:39.332707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.821 [2024-07-15 14:05:39.332733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.821 qpair failed and we were unable to recover it. 00:26:44.821 [2024-07-15 14:05:39.332891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.821 [2024-07-15 14:05:39.332918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.821 qpair failed and we were unable to recover it. 00:26:44.821 [2024-07-15 14:05:39.333115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.821 [2024-07-15 14:05:39.333145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.821 qpair failed and we were unable to recover it. 00:26:44.821 [2024-07-15 14:05:39.333292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.821 [2024-07-15 14:05:39.333317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.821 qpair failed and we were unable to recover it. 00:26:44.821 [2024-07-15 14:05:39.333434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.821 [2024-07-15 14:05:39.333459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.822 qpair failed and we were unable to recover it. 00:26:44.822 [2024-07-15 14:05:39.333656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.822 [2024-07-15 14:05:39.333683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.822 qpair failed and we were unable to recover it. 00:26:44.822 [2024-07-15 14:05:39.333830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.822 [2024-07-15 14:05:39.333857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.822 qpair failed and we were unable to recover it. 
00:26:44.822 [2024-07-15 14:05:39.334015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.822 [2024-07-15 14:05:39.334040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.822 qpair failed and we were unable to recover it. 00:26:44.822 [2024-07-15 14:05:39.334218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.822 [2024-07-15 14:05:39.334244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.822 qpair failed and we were unable to recover it. 00:26:44.822 [2024-07-15 14:05:39.334444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.822 [2024-07-15 14:05:39.334471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.822 qpair failed and we were unable to recover it. 00:26:44.822 [2024-07-15 14:05:39.334573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.822 [2024-07-15 14:05:39.334598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.822 qpair failed and we were unable to recover it. 00:26:44.822 [2024-07-15 14:05:39.334721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.822 [2024-07-15 14:05:39.334753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.822 qpair failed and we were unable to recover it. 00:26:44.822 [2024-07-15 14:05:39.335010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.822 [2024-07-15 14:05:39.335037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.822 qpair failed and we were unable to recover it. 00:26:44.822 [2024-07-15 14:05:39.335199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.822 [2024-07-15 14:05:39.335225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.822 qpair failed and we were unable to recover it. 00:26:44.822 [2024-07-15 14:05:39.335442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.822 [2024-07-15 14:05:39.335469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.822 qpair failed and we were unable to recover it. 00:26:44.822 [2024-07-15 14:05:39.335593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.822 [2024-07-15 14:05:39.335618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.822 qpair failed and we were unable to recover it. 00:26:44.822 [2024-07-15 14:05:39.335775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.822 [2024-07-15 14:05:39.335802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.822 qpair failed and we were unable to recover it. 
00:26:44.822 [2024-07-15 14:05:39.335907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.822 [2024-07-15 14:05:39.335934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.822 qpair failed and we were unable to recover it. 00:26:44.822 [2024-07-15 14:05:39.336110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.822 [2024-07-15 14:05:39.336135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.822 qpair failed and we were unable to recover it. 00:26:44.822 [2024-07-15 14:05:39.336260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.822 [2024-07-15 14:05:39.336285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.822 qpair failed and we were unable to recover it. 00:26:44.822 [2024-07-15 14:05:39.336410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.822 [2024-07-15 14:05:39.336435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.822 qpair failed and we were unable to recover it. 00:26:44.822 [2024-07-15 14:05:39.336564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.822 [2024-07-15 14:05:39.336590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.822 qpair failed and we were unable to recover it. 00:26:44.822 [2024-07-15 14:05:39.336716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.822 [2024-07-15 14:05:39.336748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.822 qpair failed and we were unable to recover it. 00:26:44.822 [2024-07-15 14:05:39.336845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.822 [2024-07-15 14:05:39.336870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.822 qpair failed and we were unable to recover it. 00:26:44.822 [2024-07-15 14:05:39.336988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.822 [2024-07-15 14:05:39.337025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.822 qpair failed and we were unable to recover it. 00:26:44.822 [2024-07-15 14:05:39.337177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.822 [2024-07-15 14:05:39.337202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.822 qpair failed and we were unable to recover it. 00:26:44.822 [2024-07-15 14:05:39.337375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.822 [2024-07-15 14:05:39.337400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.822 qpair failed and we were unable to recover it. 
00:26:44.822 [2024-07-15 14:05:39.337528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.822 [2024-07-15 14:05:39.337554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.822 qpair failed and we were unable to recover it. 00:26:44.822 [2024-07-15 14:05:39.337708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.822 [2024-07-15 14:05:39.337734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.822 qpair failed and we were unable to recover it. 00:26:44.822 [2024-07-15 14:05:39.337934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.822 [2024-07-15 14:05:39.337960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.822 qpair failed and we were unable to recover it. 00:26:44.822 [2024-07-15 14:05:39.338108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.822 [2024-07-15 14:05:39.338133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.822 qpair failed and we were unable to recover it. 00:26:44.822 [2024-07-15 14:05:39.338257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.822 [2024-07-15 14:05:39.338283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.822 qpair failed and we were unable to recover it. 00:26:44.822 [2024-07-15 14:05:39.338381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.822 [2024-07-15 14:05:39.338406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.822 qpair failed and we were unable to recover it. 00:26:44.822 [2024-07-15 14:05:39.338531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.822 [2024-07-15 14:05:39.338558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.822 qpair failed and we were unable to recover it. 00:26:44.822 [2024-07-15 14:05:39.338677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.822 [2024-07-15 14:05:39.338712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.822 qpair failed and we were unable to recover it. 00:26:44.822 [2024-07-15 14:05:39.338956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.822 [2024-07-15 14:05:39.338982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.822 qpair failed and we were unable to recover it. 00:26:44.822 [2024-07-15 14:05:39.339100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.822 [2024-07-15 14:05:39.339127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.822 qpair failed and we were unable to recover it. 
00:26:44.822 [2024-07-15 14:05:39.339280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.822 [2024-07-15 14:05:39.339305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.822 qpair failed and we were unable to recover it. 00:26:44.822 [2024-07-15 14:05:39.339412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.822 [2024-07-15 14:05:39.339442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.822 qpair failed and we were unable to recover it. 00:26:44.822 [2024-07-15 14:05:39.339578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.822 [2024-07-15 14:05:39.339604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.822 qpair failed and we were unable to recover it. 00:26:44.822 [2024-07-15 14:05:39.339747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.822 [2024-07-15 14:05:39.339773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.822 qpair failed and we were unable to recover it. 00:26:44.822 [2024-07-15 14:05:39.339889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.822 [2024-07-15 14:05:39.339915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.822 qpair failed and we were unable to recover it. 00:26:44.822 [2024-07-15 14:05:39.340045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.822 [2024-07-15 14:05:39.340074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.822 qpair failed and we were unable to recover it. 00:26:44.822 [2024-07-15 14:05:39.340179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.822 [2024-07-15 14:05:39.340204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.822 qpair failed and we were unable to recover it. 00:26:44.822 [2024-07-15 14:05:39.340356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.822 [2024-07-15 14:05:39.340382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.822 qpair failed and we were unable to recover it. 00:26:44.822 [2024-07-15 14:05:39.340537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 14:05:39.340563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.823 qpair failed and we were unable to recover it. 00:26:44.823 [2024-07-15 14:05:39.340670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 14:05:39.340696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.823 qpair failed and we were unable to recover it. 
00:26:44.823 [2024-07-15 14:05:39.340814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 14:05:39.340840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.823 qpair failed and we were unable to recover it. 00:26:44.823 [2024-07-15 14:05:39.340997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 14:05:39.341024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.823 qpair failed and we were unable to recover it. 00:26:44.823 [2024-07-15 14:05:39.341148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 14:05:39.341174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.823 qpair failed and we were unable to recover it. 00:26:44.823 [2024-07-15 14:05:39.341271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 14:05:39.341297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.823 qpair failed and we were unable to recover it. 00:26:44.823 [2024-07-15 14:05:39.341404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 14:05:39.341430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.823 qpair failed and we were unable to recover it. 00:26:44.823 [2024-07-15 14:05:39.341528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 14:05:39.341558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.823 qpair failed and we were unable to recover it. 00:26:44.823 [2024-07-15 14:05:39.341698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 14:05:39.341725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.823 qpair failed and we were unable to recover it. 00:26:44.823 [2024-07-15 14:05:39.341867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 14:05:39.341893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.823 qpair failed and we were unable to recover it. 00:26:44.823 [2024-07-15 14:05:39.342028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 14:05:39.342053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.823 qpair failed and we were unable to recover it. 00:26:44.823 [2024-07-15 14:05:39.342213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 14:05:39.342240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.823 qpair failed and we were unable to recover it. 
00:26:44.823 [2024-07-15 14:05:39.342342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 14:05:39.342368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.823 qpair failed and we were unable to recover it. 00:26:44.823 [2024-07-15 14:05:39.342607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 14:05:39.342633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.823 qpair failed and we were unable to recover it. 00:26:44.823 [2024-07-15 14:05:39.342772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 14:05:39.342799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.823 qpair failed and we were unable to recover it. 00:26:44.823 [2024-07-15 14:05:39.342977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 14:05:39.343004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.823 qpair failed and we were unable to recover it. 00:26:44.823 [2024-07-15 14:05:39.343150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 14:05:39.343175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.823 qpair failed and we were unable to recover it. 00:26:44.823 [2024-07-15 14:05:39.343326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 14:05:39.343351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.823 qpair failed and we were unable to recover it. 00:26:44.823 [2024-07-15 14:05:39.343543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 14:05:39.343569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.823 qpair failed and we were unable to recover it. 00:26:44.823 [2024-07-15 14:05:39.343697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 14:05:39.343723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.823 qpair failed and we were unable to recover it. 00:26:44.823 [2024-07-15 14:05:39.343883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 14:05:39.343909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.823 qpair failed and we were unable to recover it. 00:26:44.823 [2024-07-15 14:05:39.344045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 14:05:39.344070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.823 qpair failed and we were unable to recover it. 
00:26:44.823 [2024-07-15 14:05:39.344224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 14:05:39.344249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.823 qpair failed and we were unable to recover it. 00:26:44.823 [2024-07-15 14:05:39.344435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 14:05:39.344466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.823 qpair failed and we were unable to recover it. 00:26:44.823 [2024-07-15 14:05:39.344569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 14:05:39.344594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.823 qpair failed and we were unable to recover it. 00:26:44.823 [2024-07-15 14:05:39.344719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 14:05:39.344749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.823 qpair failed and we were unable to recover it. 00:26:44.823 [2024-07-15 14:05:39.344889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 14:05:39.344916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.823 qpair failed and we were unable to recover it. 00:26:44.823 [2024-07-15 14:05:39.345069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 14:05:39.345094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.823 qpair failed and we were unable to recover it. 00:26:44.823 [2024-07-15 14:05:39.345248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 14:05:39.345274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.823 qpair failed and we were unable to recover it. 00:26:44.823 [2024-07-15 14:05:39.345414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 14:05:39.345441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.823 qpair failed and we were unable to recover it. 00:26:44.823 [2024-07-15 14:05:39.345573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 14:05:39.345598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.823 qpair failed and we were unable to recover it. 00:26:44.823 [2024-07-15 14:05:39.345757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 14:05:39.345784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.823 qpair failed and we were unable to recover it. 
00:26:44.823 [2024-07-15 14:05:39.345911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 14:05:39.345938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.823 qpair failed and we were unable to recover it. 00:26:44.823 [2024-07-15 14:05:39.346046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 14:05:39.346072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.823 qpair failed and we were unable to recover it. 00:26:44.823 [2024-07-15 14:05:39.346219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 14:05:39.346245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.823 qpair failed and we were unable to recover it. 00:26:44.823 [2024-07-15 14:05:39.346420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 14:05:39.346447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.823 qpair failed and we were unable to recover it. 00:26:44.823 [2024-07-15 14:05:39.346652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 14:05:39.346677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.823 qpair failed and we were unable to recover it. 00:26:44.823 [2024-07-15 14:05:39.346807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 14:05:39.346838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.823 qpair failed and we were unable to recover it. 00:26:44.823 [2024-07-15 14:05:39.346967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 14:05:39.346994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.823 qpair failed and we were unable to recover it. 00:26:44.823 [2024-07-15 14:05:39.347122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 14:05:39.347147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.823 qpair failed and we were unable to recover it. 00:26:44.823 [2024-07-15 14:05:39.347274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.823 [2024-07-15 14:05:39.347300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.824 qpair failed and we were unable to recover it. 00:26:44.824 [2024-07-15 14:05:39.347435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.824 [2024-07-15 14:05:39.347460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.824 qpair failed and we were unable to recover it. 
00:26:44.824 [2024-07-15 14:05:39.347584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.824 [2024-07-15 14:05:39.347609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.824 qpair failed and we were unable to recover it. 00:26:44.824 [2024-07-15 14:05:39.347724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.824 [2024-07-15 14:05:39.347757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.824 qpair failed and we were unable to recover it. 00:26:44.824 [2024-07-15 14:05:39.347887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.824 [2024-07-15 14:05:39.347913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.824 qpair failed and we were unable to recover it. 00:26:44.824 [2024-07-15 14:05:39.348012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.824 [2024-07-15 14:05:39.348037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.824 qpair failed and we were unable to recover it. 00:26:44.824 [2024-07-15 14:05:39.348326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.824 [2024-07-15 14:05:39.348352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.824 qpair failed and we were unable to recover it. 00:26:44.824 [2024-07-15 14:05:39.348477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.824 [2024-07-15 14:05:39.348503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.824 qpair failed and we were unable to recover it. 00:26:44.824 [2024-07-15 14:05:39.348656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.824 [2024-07-15 14:05:39.348681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.824 qpair failed and we were unable to recover it. 00:26:44.824 [2024-07-15 14:05:39.348799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.824 [2024-07-15 14:05:39.348836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.824 qpair failed and we were unable to recover it. 00:26:44.824 [2024-07-15 14:05:39.348964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.824 [2024-07-15 14:05:39.348989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.824 qpair failed and we were unable to recover it. 00:26:44.824 [2024-07-15 14:05:39.349149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.824 [2024-07-15 14:05:39.349174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.824 qpair failed and we were unable to recover it. 
00:26:44.824 [2024-07-15 14:05:39.349301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.824 [2024-07-15 14:05:39.349328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420
00:26:44.824 qpair failed and we were unable to recover it.
[... the same three-line error record repeats continuously from 14:05:39.349301 through 14:05:39.387422: posix_sock_create reports connect() failed, errno = 111 (connection refused), nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420, and each affected qpair fails and cannot be recovered ...]
00:26:44.829 [2024-07-15 14:05:39.387398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.829 [2024-07-15 14:05:39.387422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420
00:26:44.829 qpair failed and we were unable to recover it.
00:26:44.829 [2024-07-15 14:05:39.387625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.829 [2024-07-15 14:05:39.387650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.829 qpair failed and we were unable to recover it. 00:26:44.829 [2024-07-15 14:05:39.387865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.829 [2024-07-15 14:05:39.387902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.829 qpair failed and we were unable to recover it. 00:26:44.829 [2024-07-15 14:05:39.388075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.829 [2024-07-15 14:05:39.388105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.829 qpair failed and we were unable to recover it. 00:26:44.829 [2024-07-15 14:05:39.388279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.829 [2024-07-15 14:05:39.388315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.829 qpair failed and we were unable to recover it. 00:26:44.829 [2024-07-15 14:05:39.388547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.829 [2024-07-15 14:05:39.388584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.829 qpair failed and we were unable to recover it. 00:26:44.829 [2024-07-15 14:05:39.388734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.829 [2024-07-15 14:05:39.388815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.829 qpair failed and we were unable to recover it. 00:26:44.829 [2024-07-15 14:05:39.388966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.829 [2024-07-15 14:05:39.388993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.829 qpair failed and we were unable to recover it. 00:26:44.829 [2024-07-15 14:05:39.389189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.829 [2024-07-15 14:05:39.389222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.829 qpair failed and we were unable to recover it. 00:26:44.829 [2024-07-15 14:05:39.389418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.829 [2024-07-15 14:05:39.389443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.829 qpair failed and we were unable to recover it. 00:26:44.829 [2024-07-15 14:05:39.389615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.829 [2024-07-15 14:05:39.389639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.829 qpair failed and we were unable to recover it. 
00:26:44.829 [2024-07-15 14:05:39.389780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.830 [2024-07-15 14:05:39.389805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.830 qpair failed and we were unable to recover it. 00:26:44.830 [2024-07-15 14:05:39.389939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.830 [2024-07-15 14:05:39.389965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.830 qpair failed and we were unable to recover it. 00:26:44.830 [2024-07-15 14:05:39.390109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.830 [2024-07-15 14:05:39.390149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.830 qpair failed and we were unable to recover it. 00:26:44.830 [2024-07-15 14:05:39.390337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.830 [2024-07-15 14:05:39.390362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.830 qpair failed and we were unable to recover it. 00:26:44.830 [2024-07-15 14:05:39.390530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.830 [2024-07-15 14:05:39.390556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.830 qpair failed and we were unable to recover it. 00:26:44.830 [2024-07-15 14:05:39.390767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.830 [2024-07-15 14:05:39.390803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.830 qpair failed and we were unable to recover it. 00:26:44.830 [2024-07-15 14:05:39.390932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.830 [2024-07-15 14:05:39.390958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.830 qpair failed and we were unable to recover it. 00:26:44.830 [2024-07-15 14:05:39.391205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.830 [2024-07-15 14:05:39.391231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.830 qpair failed and we were unable to recover it. 00:26:44.830 [2024-07-15 14:05:39.391429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.830 [2024-07-15 14:05:39.391454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.830 qpair failed and we were unable to recover it. 00:26:44.830 [2024-07-15 14:05:39.391582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.830 [2024-07-15 14:05:39.391607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.830 qpair failed and we were unable to recover it. 
00:26:44.830 [2024-07-15 14:05:39.391781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.830 [2024-07-15 14:05:39.391807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.830 qpair failed and we were unable to recover it. 00:26:44.830 [2024-07-15 14:05:39.391922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.830 [2024-07-15 14:05:39.391948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.830 qpair failed and we were unable to recover it. 00:26:44.830 [2024-07-15 14:05:39.392132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.830 [2024-07-15 14:05:39.392173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.830 qpair failed and we were unable to recover it. 00:26:44.830 [2024-07-15 14:05:39.392303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.830 [2024-07-15 14:05:39.392328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.830 qpair failed and we were unable to recover it. 00:26:44.830 [2024-07-15 14:05:39.392507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.830 [2024-07-15 14:05:39.392546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.830 qpair failed and we were unable to recover it. 00:26:44.830 [2024-07-15 14:05:39.392696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.830 [2024-07-15 14:05:39.392721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.830 qpair failed and we were unable to recover it. 00:26:44.830 [2024-07-15 14:05:39.392864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.830 [2024-07-15 14:05:39.392890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.830 qpair failed and we were unable to recover it. 00:26:44.830 [2024-07-15 14:05:39.393099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.830 [2024-07-15 14:05:39.393124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.830 qpair failed and we were unable to recover it. 00:26:44.830 [2024-07-15 14:05:39.393322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.830 [2024-07-15 14:05:39.393347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.830 qpair failed and we were unable to recover it. 00:26:44.830 [2024-07-15 14:05:39.393502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.830 [2024-07-15 14:05:39.393527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.830 qpair failed and we were unable to recover it. 
00:26:44.830 [2024-07-15 14:05:39.393632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.830 [2024-07-15 14:05:39.393657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.830 qpair failed and we were unable to recover it. 00:26:44.830 [2024-07-15 14:05:39.393787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.830 [2024-07-15 14:05:39.393826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.830 qpair failed and we were unable to recover it. 00:26:44.830 [2024-07-15 14:05:39.393953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.830 [2024-07-15 14:05:39.393978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.830 qpair failed and we were unable to recover it. 00:26:44.830 [2024-07-15 14:05:39.394094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.830 [2024-07-15 14:05:39.394120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.830 qpair failed and we were unable to recover it. 00:26:44.830 [2024-07-15 14:05:39.394233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.830 [2024-07-15 14:05:39.394269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.830 qpair failed and we were unable to recover it. 00:26:44.830 [2024-07-15 14:05:39.394493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.830 [2024-07-15 14:05:39.394518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.830 qpair failed and we were unable to recover it. 00:26:44.830 [2024-07-15 14:05:39.394697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.830 [2024-07-15 14:05:39.394743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.830 qpair failed and we were unable to recover it. 00:26:44.830 [2024-07-15 14:05:39.394880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.830 [2024-07-15 14:05:39.394905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.830 qpair failed and we were unable to recover it. 00:26:44.830 [2024-07-15 14:05:39.395029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.830 [2024-07-15 14:05:39.395054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.830 qpair failed and we were unable to recover it. 00:26:44.830 [2024-07-15 14:05:39.395290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.830 [2024-07-15 14:05:39.395326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.830 qpair failed and we were unable to recover it. 
00:26:44.830 [2024-07-15 14:05:39.395522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.830 [2024-07-15 14:05:39.395557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.830 qpair failed and we were unable to recover it. 00:26:44.830 [2024-07-15 14:05:39.395771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.830 [2024-07-15 14:05:39.395798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.830 qpair failed and we were unable to recover it. 00:26:44.830 [2024-07-15 14:05:39.395972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.830 [2024-07-15 14:05:39.396002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.830 qpair failed and we were unable to recover it. 00:26:44.830 [2024-07-15 14:05:39.396149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.830 [2024-07-15 14:05:39.396188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.830 qpair failed and we were unable to recover it. 00:26:44.830 [2024-07-15 14:05:39.396423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.830 [2024-07-15 14:05:39.396449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.830 qpair failed and we were unable to recover it. 00:26:44.830 [2024-07-15 14:05:39.396553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.830 [2024-07-15 14:05:39.396578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.830 qpair failed and we were unable to recover it. 00:26:44.830 [2024-07-15 14:05:39.396727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.830 [2024-07-15 14:05:39.396760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.830 qpair failed and we were unable to recover it. 00:26:44.830 [2024-07-15 14:05:39.396870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.830 [2024-07-15 14:05:39.396903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.830 qpair failed and we were unable to recover it. 00:26:44.830 [2024-07-15 14:05:39.397117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.830 [2024-07-15 14:05:39.397143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.830 qpair failed and we were unable to recover it. 00:26:44.830 [2024-07-15 14:05:39.397343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.830 [2024-07-15 14:05:39.397368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.830 qpair failed and we were unable to recover it. 
00:26:44.830 [2024-07-15 14:05:39.397563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.830 [2024-07-15 14:05:39.397588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.830 qpair failed and we were unable to recover it. 00:26:44.831 [2024-07-15 14:05:39.397794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.831 [2024-07-15 14:05:39.397823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.831 qpair failed and we were unable to recover it. 00:26:44.831 [2024-07-15 14:05:39.397951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.831 [2024-07-15 14:05:39.397976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.831 qpair failed and we were unable to recover it. 00:26:44.831 [2024-07-15 14:05:39.398099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.831 [2024-07-15 14:05:39.398125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.831 qpair failed and we were unable to recover it. 00:26:44.831 [2024-07-15 14:05:39.398251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.831 [2024-07-15 14:05:39.398276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.831 qpair failed and we were unable to recover it. 00:26:44.831 [2024-07-15 14:05:39.398437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.831 [2024-07-15 14:05:39.398462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.831 qpair failed and we were unable to recover it. 00:26:44.831 [2024-07-15 14:05:39.398642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.831 [2024-07-15 14:05:39.398679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.831 qpair failed and we were unable to recover it. 00:26:44.831 [2024-07-15 14:05:39.398814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.831 [2024-07-15 14:05:39.398855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.831 qpair failed and we were unable to recover it. 00:26:44.831 [2024-07-15 14:05:39.399164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.831 [2024-07-15 14:05:39.399189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.831 qpair failed and we were unable to recover it. 00:26:44.831 [2024-07-15 14:05:39.399347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.831 [2024-07-15 14:05:39.399372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.831 qpair failed and we were unable to recover it. 
00:26:44.831 [2024-07-15 14:05:39.399598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.831 [2024-07-15 14:05:39.399623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.831 qpair failed and we were unable to recover it. 00:26:44.831 [2024-07-15 14:05:39.399796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.831 [2024-07-15 14:05:39.399823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.831 qpair failed and we were unable to recover it. 00:26:44.831 [2024-07-15 14:05:39.399951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.831 [2024-07-15 14:05:39.399976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.831 qpair failed and we were unable to recover it. 00:26:44.831 [2024-07-15 14:05:39.400150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.831 [2024-07-15 14:05:39.400174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.831 qpair failed and we were unable to recover it. 00:26:44.831 [2024-07-15 14:05:39.400387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.831 [2024-07-15 14:05:39.400422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.831 qpair failed and we were unable to recover it. 00:26:44.831 [2024-07-15 14:05:39.400602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.831 [2024-07-15 14:05:39.400628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.831 qpair failed and we were unable to recover it. 00:26:44.831 [2024-07-15 14:05:39.400822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.831 [2024-07-15 14:05:39.400848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.831 qpair failed and we were unable to recover it. 00:26:44.831 [2024-07-15 14:05:39.400991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.831 [2024-07-15 14:05:39.401016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.831 qpair failed and we were unable to recover it. 00:26:44.831 [2024-07-15 14:05:39.401201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.831 [2024-07-15 14:05:39.401226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.831 qpair failed and we were unable to recover it. 00:26:44.831 [2024-07-15 14:05:39.401431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.831 [2024-07-15 14:05:39.401456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.831 qpair failed and we were unable to recover it. 
00:26:44.831 [2024-07-15 14:05:39.401622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.831 [2024-07-15 14:05:39.401657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.831 qpair failed and we were unable to recover it. 00:26:44.831 [2024-07-15 14:05:39.401847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.831 [2024-07-15 14:05:39.401874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.831 qpair failed and we were unable to recover it. 00:26:44.831 [2024-07-15 14:05:39.402038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.831 [2024-07-15 14:05:39.402063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.831 qpair failed and we were unable to recover it. 00:26:44.831 [2024-07-15 14:05:39.402255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.831 [2024-07-15 14:05:39.402290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.831 qpair failed and we were unable to recover it. 00:26:44.831 [2024-07-15 14:05:39.402430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.831 [2024-07-15 14:05:39.402455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.831 qpair failed and we were unable to recover it. 00:26:44.831 [2024-07-15 14:05:39.402597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.831 [2024-07-15 14:05:39.402637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.831 qpair failed and we were unable to recover it. 00:26:44.831 [2024-07-15 14:05:39.402778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.831 [2024-07-15 14:05:39.402805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.831 qpair failed and we were unable to recover it. 00:26:44.831 [2024-07-15 14:05:39.402997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.831 [2024-07-15 14:05:39.403023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.831 qpair failed and we were unable to recover it. 00:26:44.831 [2024-07-15 14:05:39.403186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.831 [2024-07-15 14:05:39.403211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.831 qpair failed and we were unable to recover it. 00:26:44.831 [2024-07-15 14:05:39.403453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.831 [2024-07-15 14:05:39.403479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.831 qpair failed and we were unable to recover it. 
00:26:44.831 [2024-07-15 14:05:39.403681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.831 [2024-07-15 14:05:39.403706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.831 qpair failed and we were unable to recover it. 00:26:44.831 [2024-07-15 14:05:39.403842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.831 [2024-07-15 14:05:39.403868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.831 qpair failed and we were unable to recover it. 00:26:44.831 [2024-07-15 14:05:39.404051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.831 [2024-07-15 14:05:39.404081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.831 qpair failed and we were unable to recover it. 00:26:44.831 [2024-07-15 14:05:39.404218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.831 [2024-07-15 14:05:39.404255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.831 qpair failed and we were unable to recover it. 00:26:44.831 [2024-07-15 14:05:39.404389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.831 [2024-07-15 14:05:39.404415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.831 qpair failed and we were unable to recover it. 00:26:44.831 [2024-07-15 14:05:39.404564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.831 [2024-07-15 14:05:39.404590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.831 qpair failed and we were unable to recover it. 00:26:44.831 [2024-07-15 14:05:39.404752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.831 [2024-07-15 14:05:39.404780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.831 qpair failed and we were unable to recover it. 00:26:44.831 [2024-07-15 14:05:39.404886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.831 [2024-07-15 14:05:39.404912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.831 qpair failed and we were unable to recover it. 00:26:44.831 [2024-07-15 14:05:39.405047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.831 [2024-07-15 14:05:39.405073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.831 qpair failed and we were unable to recover it. 00:26:44.831 [2024-07-15 14:05:39.405276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.831 [2024-07-15 14:05:39.405314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.831 qpair failed and we were unable to recover it. 
00:26:44.831 [2024-07-15 14:05:39.405494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.831 [2024-07-15 14:05:39.405520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.831 qpair failed and we were unable to recover it. 00:26:44.831 [2024-07-15 14:05:39.405667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.831 [2024-07-15 14:05:39.405703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.831 qpair failed and we were unable to recover it. 00:26:44.832 [2024-07-15 14:05:39.405883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.832 [2024-07-15 14:05:39.405921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.832 qpair failed and we were unable to recover it. 00:26:44.832 [2024-07-15 14:05:39.406028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.832 [2024-07-15 14:05:39.406054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.832 qpair failed and we were unable to recover it. 00:26:44.832 [2024-07-15 14:05:39.406234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.832 [2024-07-15 14:05:39.406260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.832 qpair failed and we were unable to recover it. 00:26:44.832 [2024-07-15 14:05:39.406433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.832 [2024-07-15 14:05:39.406459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.832 qpair failed and we were unable to recover it. 00:26:44.832 [2024-07-15 14:05:39.406567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.832 [2024-07-15 14:05:39.406593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.832 qpair failed and we were unable to recover it. 00:26:44.832 [2024-07-15 14:05:39.406726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.832 [2024-07-15 14:05:39.406757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.832 qpair failed and we were unable to recover it. 00:26:44.832 [2024-07-15 14:05:39.406932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.832 [2024-07-15 14:05:39.406969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.832 qpair failed and we were unable to recover it. 00:26:44.832 [2024-07-15 14:05:39.407069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.832 [2024-07-15 14:05:39.407096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.832 qpair failed and we were unable to recover it. 
00:26:44.832 [2024-07-15 14:05:39.407234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.832 [2024-07-15 14:05:39.407260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.832 qpair failed and we were unable to recover it. 00:26:44.832 [2024-07-15 14:05:39.407456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.832 [2024-07-15 14:05:39.407482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.832 qpair failed and we were unable to recover it. 00:26:44.832 [2024-07-15 14:05:39.407591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.832 [2024-07-15 14:05:39.407617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.832 qpair failed and we were unable to recover it. 00:26:44.832 [2024-07-15 14:05:39.407730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.832 [2024-07-15 14:05:39.407772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.832 qpair failed and we were unable to recover it. 00:26:44.832 [2024-07-15 14:05:39.407889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.832 [2024-07-15 14:05:39.407915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.832 qpair failed and we were unable to recover it. 00:26:44.832 [2024-07-15 14:05:39.408008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.832 [2024-07-15 14:05:39.408034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.832 qpair failed and we were unable to recover it. 00:26:44.832 [2024-07-15 14:05:39.408180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.832 [2024-07-15 14:05:39.408206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.832 qpair failed and we were unable to recover it. 00:26:44.832 [2024-07-15 14:05:39.408347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.832 [2024-07-15 14:05:39.408373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.832 qpair failed and we were unable to recover it. 00:26:44.832 [2024-07-15 14:05:39.408469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.832 [2024-07-15 14:05:39.408495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.832 qpair failed and we were unable to recover it. 00:26:44.832 [2024-07-15 14:05:39.408644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.832 [2024-07-15 14:05:39.408671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.832 qpair failed and we were unable to recover it. 
00:26:44.832 [2024-07-15 14:05:39.408846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.832 [2024-07-15 14:05:39.408873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.832 qpair failed and we were unable to recover it. 00:26:44.832 [2024-07-15 14:05:39.409140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.832 [2024-07-15 14:05:39.409167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.832 qpair failed and we were unable to recover it. 00:26:44.832 [2024-07-15 14:05:39.409282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.832 [2024-07-15 14:05:39.409308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.832 qpair failed and we were unable to recover it. 00:26:44.832 [2024-07-15 14:05:39.409482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.832 [2024-07-15 14:05:39.409507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.832 qpair failed and we were unable to recover it. 00:26:44.832 [2024-07-15 14:05:39.409715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.832 [2024-07-15 14:05:39.409757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.832 qpair failed and we were unable to recover it. 00:26:44.832 [2024-07-15 14:05:39.409898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.832 [2024-07-15 14:05:39.409924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.832 qpair failed and we were unable to recover it. 00:26:44.832 [2024-07-15 14:05:39.410164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.832 [2024-07-15 14:05:39.410200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.832 qpair failed and we were unable to recover it. 00:26:44.832 [2024-07-15 14:05:39.410334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.832 [2024-07-15 14:05:39.410371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.832 qpair failed and we were unable to recover it. 00:26:44.832 [2024-07-15 14:05:39.410607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.832 [2024-07-15 14:05:39.410644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.832 qpair failed and we were unable to recover it. 00:26:44.832 [2024-07-15 14:05:39.410803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.832 [2024-07-15 14:05:39.410830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.832 qpair failed and we were unable to recover it. 
00:26:44.832 [2024-07-15 14:05:39.411018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.832 [2024-07-15 14:05:39.411054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.832 qpair failed and we were unable to recover it. 00:26:44.832 [2024-07-15 14:05:39.411274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.832 [2024-07-15 14:05:39.411309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.832 qpair failed and we were unable to recover it. 00:26:44.832 [2024-07-15 14:05:39.411456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.832 [2024-07-15 14:05:39.411486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.832 qpair failed and we were unable to recover it. 00:26:44.832 [2024-07-15 14:05:39.411651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.832 [2024-07-15 14:05:39.411677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.832 qpair failed and we were unable to recover it. 00:26:44.832 [2024-07-15 14:05:39.411853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.832 [2024-07-15 14:05:39.411880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.832 qpair failed and we were unable to recover it. 00:26:44.832 [2024-07-15 14:05:39.412056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.832 [2024-07-15 14:05:39.412082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.832 qpair failed and we were unable to recover it. 00:26:44.832 [2024-07-15 14:05:39.412281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.832 [2024-07-15 14:05:39.412306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.832 qpair failed and we were unable to recover it. 00:26:44.832 [2024-07-15 14:05:39.412415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.832 [2024-07-15 14:05:39.412447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.832 qpair failed and we were unable to recover it. 00:26:44.832 [2024-07-15 14:05:39.412604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.832 [2024-07-15 14:05:39.412630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.832 qpair failed and we were unable to recover it. 00:26:44.832 [2024-07-15 14:05:39.412847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.832 [2024-07-15 14:05:39.412885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.832 qpair failed and we were unable to recover it. 
00:26:44.832 [2024-07-15 14:05:39.413048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.832 [2024-07-15 14:05:39.413074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.832 qpair failed and we were unable to recover it. 00:26:44.832 [2024-07-15 14:05:39.413254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.832 [2024-07-15 14:05:39.413280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.832 qpair failed and we were unable to recover it. 00:26:44.832 [2024-07-15 14:05:39.413447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.832 [2024-07-15 14:05:39.413473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.833 qpair failed and we were unable to recover it. 00:26:44.833 [2024-07-15 14:05:39.413662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.833 [2024-07-15 14:05:39.413688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.833 qpair failed and we were unable to recover it. 00:26:44.833 [2024-07-15 14:05:39.413884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.833 [2024-07-15 14:05:39.413910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.833 qpair failed and we were unable to recover it. 00:26:44.833 [2024-07-15 14:05:39.414060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.833 [2024-07-15 14:05:39.414085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.833 qpair failed and we were unable to recover it. 00:26:44.833 [2024-07-15 14:05:39.414282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.833 [2024-07-15 14:05:39.414308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.833 qpair failed and we were unable to recover it. 00:26:44.833 [2024-07-15 14:05:39.414439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.833 [2024-07-15 14:05:39.414465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.833 qpair failed and we were unable to recover it. 00:26:44.833 [2024-07-15 14:05:39.414567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.833 [2024-07-15 14:05:39.414593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.833 qpair failed and we were unable to recover it. 00:26:44.833 [2024-07-15 14:05:39.414722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.833 [2024-07-15 14:05:39.414754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.833 qpair failed and we were unable to recover it. 
00:26:44.833 [2024-07-15 14:05:39.414962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.833 [2024-07-15 14:05:39.414987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.833 qpair failed and we were unable to recover it. 00:26:44.833 [2024-07-15 14:05:39.415113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.833 [2024-07-15 14:05:39.415139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.833 qpair failed and we were unable to recover it. 00:26:44.833 [2024-07-15 14:05:39.415281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.833 [2024-07-15 14:05:39.415308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.833 qpair failed and we were unable to recover it. 00:26:44.833 [2024-07-15 14:05:39.415459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.833 [2024-07-15 14:05:39.415484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.833 qpair failed and we were unable to recover it. 00:26:44.833 [2024-07-15 14:05:39.415619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.833 [2024-07-15 14:05:39.415645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.833 qpair failed and we were unable to recover it. 00:26:44.833 [2024-07-15 14:05:39.415778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.833 [2024-07-15 14:05:39.415804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.833 qpair failed and we were unable to recover it. 00:26:44.833 [2024-07-15 14:05:39.415901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.833 [2024-07-15 14:05:39.415926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.833 qpair failed and we were unable to recover it. 00:26:44.833 [2024-07-15 14:05:39.416028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.833 [2024-07-15 14:05:39.416053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.833 qpair failed and we were unable to recover it. 00:26:44.833 [2024-07-15 14:05:39.416186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.833 [2024-07-15 14:05:39.416211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.833 qpair failed and we were unable to recover it. 00:26:44.833 [2024-07-15 14:05:39.416344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.833 [2024-07-15 14:05:39.416370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.833 qpair failed and we were unable to recover it. 
00:26:44.833 [2024-07-15 14:05:39.416544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.833 [2024-07-15 14:05:39.416569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.833 qpair failed and we were unable to recover it. 00:26:44.833 [2024-07-15 14:05:39.416776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.833 [2024-07-15 14:05:39.416802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.833 qpair failed and we were unable to recover it. 00:26:44.833 [2024-07-15 14:05:39.416977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.833 [2024-07-15 14:05:39.417014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.833 qpair failed and we were unable to recover it. 00:26:44.833 [2024-07-15 14:05:39.417140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.833 [2024-07-15 14:05:39.417166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.833 qpair failed and we were unable to recover it. 00:26:44.833 [2024-07-15 14:05:39.417267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.833 [2024-07-15 14:05:39.417292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.833 qpair failed and we were unable to recover it. 00:26:44.833 [2024-07-15 14:05:39.417426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.833 [2024-07-15 14:05:39.417453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.833 qpair failed and we were unable to recover it. 00:26:44.833 [2024-07-15 14:05:39.417582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.833 [2024-07-15 14:05:39.417608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.833 qpair failed and we were unable to recover it. 00:26:44.833 [2024-07-15 14:05:39.417814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.833 [2024-07-15 14:05:39.417841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.833 qpair failed and we were unable to recover it. 00:26:44.833 [2024-07-15 14:05:39.418019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.833 [2024-07-15 14:05:39.418044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.833 qpair failed and we were unable to recover it. 00:26:44.833 [2024-07-15 14:05:39.418210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.833 [2024-07-15 14:05:39.418236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.833 qpair failed and we were unable to recover it. 
00:26:44.833 [2024-07-15 14:05:39.418453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.833 [2024-07-15 14:05:39.418488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.833 qpair failed and we were unable to recover it. 00:26:44.833 [2024-07-15 14:05:39.418640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.833 [2024-07-15 14:05:39.418667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.833 qpair failed and we were unable to recover it. 00:26:44.833 [2024-07-15 14:05:39.418847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.833 [2024-07-15 14:05:39.418875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.833 qpair failed and we were unable to recover it. 00:26:44.833 [2024-07-15 14:05:39.419095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.833 [2024-07-15 14:05:39.419130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.833 qpair failed and we were unable to recover it. 00:26:44.833 [2024-07-15 14:05:39.419259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.833 [2024-07-15 14:05:39.419284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.833 qpair failed and we were unable to recover it. 00:26:44.833 [2024-07-15 14:05:39.419488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.833 [2024-07-15 14:05:39.419526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.833 qpair failed and we were unable to recover it. 00:26:44.833 [2024-07-15 14:05:39.419631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.833 [2024-07-15 14:05:39.419656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.833 qpair failed and we were unable to recover it. 00:26:44.833 [2024-07-15 14:05:39.419784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.833 [2024-07-15 14:05:39.419809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.833 qpair failed and we were unable to recover it. 00:26:44.834 [2024-07-15 14:05:39.419970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.834 [2024-07-15 14:05:39.419996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.834 qpair failed and we were unable to recover it. 00:26:44.834 [2024-07-15 14:05:39.420188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.834 [2024-07-15 14:05:39.420214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.834 qpair failed and we were unable to recover it. 
00:26:44.834 [2024-07-15 14:05:39.420394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.834 [2024-07-15 14:05:39.420419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.834 qpair failed and we were unable to recover it. 00:26:44.834 [2024-07-15 14:05:39.420639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.834 [2024-07-15 14:05:39.420664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.834 qpair failed and we were unable to recover it. 00:26:44.834 [2024-07-15 14:05:39.420833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.834 [2024-07-15 14:05:39.420860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.834 qpair failed and we were unable to recover it. 00:26:44.834 [2024-07-15 14:05:39.420993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.834 [2024-07-15 14:05:39.421019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.834 qpair failed and we were unable to recover it. 00:26:44.834 [2024-07-15 14:05:39.421173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.834 [2024-07-15 14:05:39.421198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.834 qpair failed and we were unable to recover it. 00:26:44.834 [2024-07-15 14:05:39.421304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.834 [2024-07-15 14:05:39.421330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.834 qpair failed and we were unable to recover it. 00:26:44.834 [2024-07-15 14:05:39.421472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.834 [2024-07-15 14:05:39.421496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.834 qpair failed and we were unable to recover it. 00:26:44.834 [2024-07-15 14:05:39.421661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.834 [2024-07-15 14:05:39.421686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.834 qpair failed and we were unable to recover it. 00:26:44.834 [2024-07-15 14:05:39.421862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.834 [2024-07-15 14:05:39.421900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.834 qpair failed and we were unable to recover it. 00:26:44.834 [2024-07-15 14:05:39.422064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.834 [2024-07-15 14:05:39.422090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.834 qpair failed and we were unable to recover it. 
00:26:44.834 [2024-07-15 14:05:39.422299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.834 [2024-07-15 14:05:39.422336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.834 qpair failed and we were unable to recover it. 00:26:44.834 [2024-07-15 14:05:39.422473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.834 [2024-07-15 14:05:39.422499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.834 qpair failed and we were unable to recover it. 00:26:44.834 [2024-07-15 14:05:39.422679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.834 [2024-07-15 14:05:39.422704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.834 qpair failed and we were unable to recover it. 00:26:44.834 [2024-07-15 14:05:39.422920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.834 [2024-07-15 14:05:39.422947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.834 qpair failed and we were unable to recover it. 00:26:44.834 [2024-07-15 14:05:39.423085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.834 [2024-07-15 14:05:39.423110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.834 qpair failed and we were unable to recover it. 00:26:44.834 [2024-07-15 14:05:39.423252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.834 [2024-07-15 14:05:39.423277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.834 qpair failed and we were unable to recover it. 00:26:44.834 [2024-07-15 14:05:39.423413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.834 [2024-07-15 14:05:39.423438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.834 qpair failed and we were unable to recover it. 00:26:44.834 [2024-07-15 14:05:39.423560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.834 [2024-07-15 14:05:39.423586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.834 qpair failed and we were unable to recover it. 00:26:44.834 [2024-07-15 14:05:39.423750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.834 [2024-07-15 14:05:39.423775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.834 qpair failed and we were unable to recover it. 00:26:44.834 [2024-07-15 14:05:39.423997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.834 [2024-07-15 14:05:39.424031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.834 qpair failed and we were unable to recover it. 
00:26:44.834 [2024-07-15 14:05:39.424192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.834 [2024-07-15 14:05:39.424217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.834 qpair failed and we were unable to recover it. 00:26:44.834 [2024-07-15 14:05:39.424376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.834 [2024-07-15 14:05:39.424403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.834 qpair failed and we were unable to recover it. 00:26:44.834 [2024-07-15 14:05:39.424565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.834 [2024-07-15 14:05:39.424590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.834 qpair failed and we were unable to recover it. 00:26:44.834 [2024-07-15 14:05:39.424810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.834 [2024-07-15 14:05:39.424848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.834 qpair failed and we were unable to recover it. 00:26:44.834 [2024-07-15 14:05:39.425008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.834 [2024-07-15 14:05:39.425033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.834 qpair failed and we were unable to recover it. 00:26:44.834 [2024-07-15 14:05:39.425245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.834 [2024-07-15 14:05:39.425271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.834 qpair failed and we were unable to recover it. 00:26:44.834 [2024-07-15 14:05:39.425446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.834 [2024-07-15 14:05:39.425472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.834 qpair failed and we were unable to recover it. 00:26:44.834 [2024-07-15 14:05:39.425636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.834 [2024-07-15 14:05:39.425672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.834 qpair failed and we were unable to recover it. 00:26:44.834 [2024-07-15 14:05:39.425833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.834 [2024-07-15 14:05:39.425870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.834 qpair failed and we were unable to recover it. 00:26:44.834 [2024-07-15 14:05:39.426003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.834 [2024-07-15 14:05:39.426029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.834 qpair failed and we were unable to recover it. 
00:26:44.834 [2024-07-15 14:05:39.426138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.834 [2024-07-15 14:05:39.426164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.834 qpair failed and we were unable to recover it. 00:26:44.834 [2024-07-15 14:05:39.426285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.834 [2024-07-15 14:05:39.426310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.834 qpair failed and we were unable to recover it. 00:26:44.834 [2024-07-15 14:05:39.426431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.834 [2024-07-15 14:05:39.426461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.834 qpair failed and we were unable to recover it. 00:26:44.834 [2024-07-15 14:05:39.426557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.834 [2024-07-15 14:05:39.426582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.834 qpair failed and we were unable to recover it. 00:26:44.834 [2024-07-15 14:05:39.426733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.834 [2024-07-15 14:05:39.426765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.834 qpair failed and we were unable to recover it. 00:26:44.834 [2024-07-15 14:05:39.426908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.834 [2024-07-15 14:05:39.426935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.834 qpair failed and we were unable to recover it. 00:26:44.834 [2024-07-15 14:05:39.427040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.834 [2024-07-15 14:05:39.427065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.834 qpair failed and we were unable to recover it. 00:26:44.834 [2024-07-15 14:05:39.427227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.835 [2024-07-15 14:05:39.427253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.835 qpair failed and we were unable to recover it. 00:26:44.835 [2024-07-15 14:05:39.427476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.835 [2024-07-15 14:05:39.427512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.835 qpair failed and we were unable to recover it. 00:26:44.835 [2024-07-15 14:05:39.427675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.835 [2024-07-15 14:05:39.427701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.835 qpair failed and we were unable to recover it. 
00:26:44.835 [2024-07-15 14:05:39.427901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.835 [2024-07-15 14:05:39.427928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.835 qpair failed and we were unable to recover it. 00:26:44.835 [2024-07-15 14:05:39.428061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.835 [2024-07-15 14:05:39.428086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.835 qpair failed and we were unable to recover it. 00:26:44.835 [2024-07-15 14:05:39.428219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.835 [2024-07-15 14:05:39.428244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.835 qpair failed and we were unable to recover it. 00:26:44.835 [2024-07-15 14:05:39.428390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.835 [2024-07-15 14:05:39.428416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.835 qpair failed and we were unable to recover it. 00:26:44.835 [2024-07-15 14:05:39.428530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.835 [2024-07-15 14:05:39.428556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.835 qpair failed and we were unable to recover it. 00:26:44.835 [2024-07-15 14:05:39.428692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.835 [2024-07-15 14:05:39.428718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.835 qpair failed and we were unable to recover it. 00:26:44.835 [2024-07-15 14:05:39.428902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.835 [2024-07-15 14:05:39.428928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.835 qpair failed and we were unable to recover it. 00:26:44.835 [2024-07-15 14:05:39.429121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.835 [2024-07-15 14:05:39.429146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.835 qpair failed and we were unable to recover it. 00:26:44.835 [2024-07-15 14:05:39.429292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.835 [2024-07-15 14:05:39.429317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.835 qpair failed and we were unable to recover it. 00:26:44.835 [2024-07-15 14:05:39.429453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.835 [2024-07-15 14:05:39.429479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.835 qpair failed and we were unable to recover it. 
00:26:44.835 [2024-07-15 14:05:39.429642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.835 [2024-07-15 14:05:39.429667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.835 qpair failed and we were unable to recover it. 00:26:44.835 [2024-07-15 14:05:39.429880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.835 [2024-07-15 14:05:39.429907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.835 qpair failed and we were unable to recover it. 00:26:44.835 [2024-07-15 14:05:39.430042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.835 [2024-07-15 14:05:39.430068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.835 qpair failed and we were unable to recover it. 00:26:44.835 [2024-07-15 14:05:39.430246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.835 [2024-07-15 14:05:39.430272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.835 qpair failed and we were unable to recover it. 00:26:44.835 [2024-07-15 14:05:39.430480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.835 [2024-07-15 14:05:39.430507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.835 qpair failed and we were unable to recover it. 00:26:44.835 [2024-07-15 14:05:39.430684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.835 [2024-07-15 14:05:39.430709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.835 qpair failed and we were unable to recover it. 00:26:44.835 [2024-07-15 14:05:39.430871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.835 [2024-07-15 14:05:39.430897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.835 qpair failed and we were unable to recover it. 00:26:44.835 [2024-07-15 14:05:39.431085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.835 [2024-07-15 14:05:39.431111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.835 qpair failed and we were unable to recover it. 00:26:44.835 [2024-07-15 14:05:39.431306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.835 [2024-07-15 14:05:39.431332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.835 qpair failed and we were unable to recover it. 00:26:44.835 [2024-07-15 14:05:39.431461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.835 [2024-07-15 14:05:39.431487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.835 qpair failed and we were unable to recover it. 
00:26:44.835 [2024-07-15 14:05:39.431665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.835 [2024-07-15 14:05:39.431691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.835 qpair failed and we were unable to recover it. 00:26:44.835 14:05:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:44.835 [2024-07-15 14:05:39.431851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.835 [2024-07-15 14:05:39.431878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.835 14:05:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:26:44.835 qpair failed and we were unable to recover it. 00:26:44.835 14:05:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:44.835 [2024-07-15 14:05:39.432100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.835 [2024-07-15 14:05:39.432136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.835 qpair failed and we were unable to recover it. 00:26:44.835 14:05:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:44.835 14:05:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:44.835 [2024-07-15 14:05:39.432273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.835 [2024-07-15 14:05:39.432299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.835 qpair failed and we were unable to recover it. 00:26:44.835 [2024-07-15 14:05:39.432437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.835 [2024-07-15 14:05:39.432462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.835 qpair failed and we were unable to recover it. 00:26:44.835 [2024-07-15 14:05:39.432564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.835 [2024-07-15 14:05:39.432600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.835 qpair failed and we were unable to recover it. 00:26:44.835 [2024-07-15 14:05:39.432723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.835 [2024-07-15 14:05:39.432763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.835 qpair failed and we were unable to recover it. 00:26:44.835 [2024-07-15 14:05:39.432924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.835 [2024-07-15 14:05:39.432950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.835 qpair failed and we were unable to recover it. 
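The xtrace lines interleaved above come from the nvmf_target_disconnect_tc2 harness finishing its target start-up phase (timing_exit start_nvmf_tgt) while the initiator side keeps retrying the TCP connection and hitting ECONNREFUSED, which is why the same error pair repeats until the test reports "qpair failed and we were unable to recover it." A simplified retry loop of the kind the log reflects; this is a sketch only, the function name and parameters are hypothetical and not SPDK's nvme_tcp_qpair_connect_sock:

#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

/* Illustrative retry loop: treat ECONNREFUSED as transient and retry, give
 * up on any other error or once the attempt budget is spent. */
static int connect_with_retry(const char *ip, uint16_t port, int max_tries)
{
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);

    for (int i = 0; i < max_tries; i++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0)
            return fd;                      /* listener is up, success */
        fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                errno, strerror(errno));
        close(fd);
        if (errno != ECONNREFUSED)
            break;                          /* only "connection refused" is retried */
        usleep(100 * 1000);                 /* back off 100 ms between attempts */
    }
    return -1;
}

int main(void)
{
    /* Hypothetical usage; the address and port mirror the ones in the log. */
    int fd = connect_with_retry("10.0.0.2", 4420, 5);
    if (fd < 0) {
        fprintf(stderr, "giving up: unable to connect\n");
        return 1;
    }
    close(fd);
    return 0;
}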
00:26:44.835 [2024-07-15 14:05:39.433109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.835 [2024-07-15 14:05:39.433136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.835 qpair failed and we were unable to recover it. 00:26:44.835 [2024-07-15 14:05:39.433241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.835 [2024-07-15 14:05:39.433266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.835 qpair failed and we were unable to recover it. 00:26:44.835 [2024-07-15 14:05:39.433395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.836 [2024-07-15 14:05:39.433425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.836 qpair failed and we were unable to recover it. 00:26:44.836 [2024-07-15 14:05:39.433555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.836 [2024-07-15 14:05:39.433580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.836 qpair failed and we were unable to recover it. 00:26:44.836 [2024-07-15 14:05:39.433699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.836 [2024-07-15 14:05:39.433724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.836 qpair failed and we were unable to recover it. 00:26:44.836 [2024-07-15 14:05:39.433910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.836 [2024-07-15 14:05:39.433937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.836 qpair failed and we were unable to recover it. 00:26:44.836 [2024-07-15 14:05:39.434070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.836 [2024-07-15 14:05:39.434095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.836 qpair failed and we were unable to recover it. 00:26:44.836 [2024-07-15 14:05:39.434225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.836 [2024-07-15 14:05:39.434250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.836 qpair failed and we were unable to recover it. 00:26:44.836 [2024-07-15 14:05:39.434379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.836 [2024-07-15 14:05:39.434405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.836 qpair failed and we were unable to recover it. 00:26:44.836 [2024-07-15 14:05:39.434616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.836 [2024-07-15 14:05:39.434642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.836 qpair failed and we were unable to recover it. 
00:26:44.836 [2024-07-15 14:05:39.434762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.836 [2024-07-15 14:05:39.434789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.836 qpair failed and we were unable to recover it. 00:26:44.836 [2024-07-15 14:05:39.434896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.836 [2024-07-15 14:05:39.434922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.836 qpair failed and we were unable to recover it. 00:26:44.836 [2024-07-15 14:05:39.435103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.836 [2024-07-15 14:05:39.435132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.836 qpair failed and we were unable to recover it. 00:26:44.836 [2024-07-15 14:05:39.435310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.836 [2024-07-15 14:05:39.435335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.836 qpair failed and we were unable to recover it. 00:26:44.836 [2024-07-15 14:05:39.435463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.836 [2024-07-15 14:05:39.435489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.836 qpair failed and we were unable to recover it. 00:26:44.836 [2024-07-15 14:05:39.435608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.836 [2024-07-15 14:05:39.435633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.836 qpair failed and we were unable to recover it. 00:26:44.836 [2024-07-15 14:05:39.435763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.836 [2024-07-15 14:05:39.435789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.836 qpair failed and we were unable to recover it. 00:26:44.836 [2024-07-15 14:05:39.435919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.836 [2024-07-15 14:05:39.435945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.836 qpair failed and we were unable to recover it. 00:26:44.836 [2024-07-15 14:05:39.436086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.836 [2024-07-15 14:05:39.436112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.836 qpair failed and we were unable to recover it. 00:26:44.836 [2024-07-15 14:05:39.436246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.836 [2024-07-15 14:05:39.436272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.836 qpair failed and we were unable to recover it. 
00:26:44.836 [2024-07-15 14:05:39.436403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.836 [2024-07-15 14:05:39.436428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.836 qpair failed and we were unable to recover it. 00:26:44.836 [2024-07-15 14:05:39.436549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.836 [2024-07-15 14:05:39.436574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.836 qpair failed and we were unable to recover it. 00:26:44.836 [2024-07-15 14:05:39.436728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.836 [2024-07-15 14:05:39.436760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.836 qpair failed and we were unable to recover it. 00:26:44.836 [2024-07-15 14:05:39.436858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.836 [2024-07-15 14:05:39.436884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.836 qpair failed and we were unable to recover it. 00:26:44.836 [2024-07-15 14:05:39.437014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.836 [2024-07-15 14:05:39.437040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.836 qpair failed and we were unable to recover it. 00:26:44.836 [2024-07-15 14:05:39.437153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.836 [2024-07-15 14:05:39.437178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.836 qpair failed and we were unable to recover it. 00:26:44.836 [2024-07-15 14:05:39.437335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.836 [2024-07-15 14:05:39.437360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.836 qpair failed and we were unable to recover it. 00:26:44.836 [2024-07-15 14:05:39.437497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.836 [2024-07-15 14:05:39.437523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.836 qpair failed and we were unable to recover it. 00:26:44.836 [2024-07-15 14:05:39.437654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.836 [2024-07-15 14:05:39.437679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.836 qpair failed and we were unable to recover it. 00:26:44.836 [2024-07-15 14:05:39.437790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.836 [2024-07-15 14:05:39.437817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.836 qpair failed and we were unable to recover it. 
00:26:44.836 [2024-07-15 14:05:39.437921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.836 [2024-07-15 14:05:39.437947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.836 qpair failed and we were unable to recover it. 00:26:44.836 [2024-07-15 14:05:39.438046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.836 [2024-07-15 14:05:39.438071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.836 qpair failed and we were unable to recover it. 00:26:44.836 [2024-07-15 14:05:39.438168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.836 [2024-07-15 14:05:39.438193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.836 qpair failed and we were unable to recover it. 00:26:44.836 [2024-07-15 14:05:39.438288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.836 [2024-07-15 14:05:39.438313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.836 qpair failed and we were unable to recover it. 00:26:44.836 [2024-07-15 14:05:39.438439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.836 [2024-07-15 14:05:39.438465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.836 qpair failed and we were unable to recover it. 00:26:44.836 [2024-07-15 14:05:39.438604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.836 [2024-07-15 14:05:39.438629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.836 qpair failed and we were unable to recover it. 00:26:44.836 [2024-07-15 14:05:39.438743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.836 [2024-07-15 14:05:39.438771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.836 qpair failed and we were unable to recover it. 00:26:44.836 [2024-07-15 14:05:39.438903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.836 [2024-07-15 14:05:39.438928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.836 qpair failed and we were unable to recover it. 00:26:44.836 [2024-07-15 14:05:39.439054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.837 [2024-07-15 14:05:39.439080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.837 qpair failed and we were unable to recover it. 00:26:44.837 [2024-07-15 14:05:39.439193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.837 [2024-07-15 14:05:39.439220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.837 qpair failed and we were unable to recover it. 
00:26:44.837 [2024-07-15 14:05:39.439390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.837 [2024-07-15 14:05:39.439416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.837 qpair failed and we were unable to recover it. 00:26:44.837 [2024-07-15 14:05:39.439540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.837 [2024-07-15 14:05:39.439565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.837 qpair failed and we were unable to recover it. 00:26:44.837 [2024-07-15 14:05:39.439691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.837 [2024-07-15 14:05:39.439721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.837 qpair failed and we were unable to recover it. 00:26:44.837 [2024-07-15 14:05:39.439859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.837 [2024-07-15 14:05:39.439885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.837 qpair failed and we were unable to recover it. 00:26:44.837 [2024-07-15 14:05:39.439979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.837 [2024-07-15 14:05:39.440006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.837 qpair failed and we were unable to recover it. 00:26:44.837 [2024-07-15 14:05:39.440135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.837 [2024-07-15 14:05:39.440160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.837 qpair failed and we were unable to recover it. 00:26:44.837 [2024-07-15 14:05:39.440289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.837 [2024-07-15 14:05:39.440314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.837 qpair failed and we were unable to recover it. 00:26:44.837 [2024-07-15 14:05:39.440422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.837 [2024-07-15 14:05:39.440449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.837 qpair failed and we were unable to recover it. 00:26:44.837 [2024-07-15 14:05:39.440583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.837 [2024-07-15 14:05:39.440609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.837 qpair failed and we were unable to recover it. 00:26:44.837 [2024-07-15 14:05:39.440709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.837 [2024-07-15 14:05:39.440734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.837 qpair failed and we were unable to recover it. 
00:26:44.837 [2024-07-15 14:05:39.440848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.837 [2024-07-15 14:05:39.440873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.837 qpair failed and we were unable to recover it. 00:26:44.837 [2024-07-15 14:05:39.440984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.837 [2024-07-15 14:05:39.441009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.837 qpair failed and we were unable to recover it. 00:26:44.837 [2024-07-15 14:05:39.441141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.837 [2024-07-15 14:05:39.441168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.837 qpair failed and we were unable to recover it. 00:26:44.837 [2024-07-15 14:05:39.441294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.837 [2024-07-15 14:05:39.441319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.837 qpair failed and we were unable to recover it. 00:26:44.837 [2024-07-15 14:05:39.441477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.837 [2024-07-15 14:05:39.441503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.837 qpair failed and we were unable to recover it. 00:26:44.837 [2024-07-15 14:05:39.441606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.837 [2024-07-15 14:05:39.441631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.837 qpair failed and we were unable to recover it. 00:26:44.837 [2024-07-15 14:05:39.441776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.837 [2024-07-15 14:05:39.441803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.837 qpair failed and we were unable to recover it. 00:26:44.837 [2024-07-15 14:05:39.441911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.837 [2024-07-15 14:05:39.441937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.837 qpair failed and we were unable to recover it. 00:26:44.837 [2024-07-15 14:05:39.442045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.837 [2024-07-15 14:05:39.442070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.837 qpair failed and we were unable to recover it. 00:26:44.837 [2024-07-15 14:05:39.442169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.837 [2024-07-15 14:05:39.442199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.837 qpair failed and we were unable to recover it. 
00:26:44.837 [2024-07-15 14:05:39.442321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.837 [2024-07-15 14:05:39.442346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.837 qpair failed and we were unable to recover it. 00:26:44.837 [2024-07-15 14:05:39.442504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.837 [2024-07-15 14:05:39.442530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.837 qpair failed and we were unable to recover it. 00:26:44.837 [2024-07-15 14:05:39.442632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.837 [2024-07-15 14:05:39.442657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.837 qpair failed and we were unable to recover it. 00:26:44.837 [2024-07-15 14:05:39.442765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.837 [2024-07-15 14:05:39.442792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.837 qpair failed and we were unable to recover it. 00:26:44.837 [2024-07-15 14:05:39.442890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.837 [2024-07-15 14:05:39.442916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.837 qpair failed and we were unable to recover it. 00:26:44.837 [2024-07-15 14:05:39.443043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.837 [2024-07-15 14:05:39.443070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.837 qpair failed and we were unable to recover it. 00:26:44.837 [2024-07-15 14:05:39.443193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.837 [2024-07-15 14:05:39.443218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.837 qpair failed and we were unable to recover it. 00:26:44.837 [2024-07-15 14:05:39.443387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.837 [2024-07-15 14:05:39.443412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.837 qpair failed and we were unable to recover it. 00:26:44.837 [2024-07-15 14:05:39.443629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.837 [2024-07-15 14:05:39.443655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.837 qpair failed and we were unable to recover it. 00:26:44.837 [2024-07-15 14:05:39.443790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.837 [2024-07-15 14:05:39.443818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.837 qpair failed and we were unable to recover it. 
00:26:44.837 [2024-07-15 14:05:39.443942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.837 [2024-07-15 14:05:39.443967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.837 qpair failed and we were unable to recover it. 00:26:44.837 [2024-07-15 14:05:39.444109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.837 [2024-07-15 14:05:39.444135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.837 qpair failed and we were unable to recover it. 00:26:44.837 [2024-07-15 14:05:39.444267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.837 [2024-07-15 14:05:39.444294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.837 qpair failed and we were unable to recover it. 00:26:44.837 [2024-07-15 14:05:39.444484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.837 [2024-07-15 14:05:39.444508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.837 qpair failed and we were unable to recover it. 00:26:44.837 [2024-07-15 14:05:39.444639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.837 [2024-07-15 14:05:39.444665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.837 qpair failed and we were unable to recover it. 00:26:44.837 [2024-07-15 14:05:39.444775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.837 [2024-07-15 14:05:39.444802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.837 qpair failed and we were unable to recover it. 00:26:44.837 [2024-07-15 14:05:39.444906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.837 [2024-07-15 14:05:39.444932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.837 qpair failed and we were unable to recover it. 00:26:44.837 [2024-07-15 14:05:39.445069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.837 [2024-07-15 14:05:39.445095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.837 qpair failed and we were unable to recover it. 00:26:44.837 [2024-07-15 14:05:39.445219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.837 [2024-07-15 14:05:39.445244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.838 qpair failed and we were unable to recover it. 00:26:44.838 [2024-07-15 14:05:39.445389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.838 [2024-07-15 14:05:39.445415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.838 qpair failed and we were unable to recover it. 
00:26:44.838 [2024-07-15 14:05:39.445538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.838 [2024-07-15 14:05:39.445564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.838 qpair failed and we were unable to recover it. 00:26:44.838 [2024-07-15 14:05:39.445690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.838 [2024-07-15 14:05:39.445716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.838 qpair failed and we were unable to recover it. 00:26:44.838 [2024-07-15 14:05:39.445852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.838 [2024-07-15 14:05:39.445878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.838 qpair failed and we were unable to recover it. 00:26:44.838 [2024-07-15 14:05:39.445999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.838 [2024-07-15 14:05:39.446026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.838 qpair failed and we were unable to recover it. 00:26:44.838 [2024-07-15 14:05:39.446217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.838 [2024-07-15 14:05:39.446253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.838 qpair failed and we were unable to recover it. 00:26:44.838 [2024-07-15 14:05:39.446391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.838 [2024-07-15 14:05:39.446417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.838 qpair failed and we were unable to recover it. 00:26:44.838 [2024-07-15 14:05:39.446548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.838 [2024-07-15 14:05:39.446574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.838 qpair failed and we were unable to recover it. 00:26:44.838 [2024-07-15 14:05:39.446703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.838 [2024-07-15 14:05:39.446729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.838 qpair failed and we were unable to recover it. 00:26:44.838 [2024-07-15 14:05:39.446845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.838 [2024-07-15 14:05:39.446872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.838 qpair failed and we were unable to recover it. 00:26:44.838 [2024-07-15 14:05:39.446966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.838 [2024-07-15 14:05:39.446992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.838 qpair failed and we were unable to recover it. 
00:26:44.838 [2024-07-15 14:05:39.447109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.838 [2024-07-15 14:05:39.447135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.838 qpair failed and we were unable to recover it. 00:26:44.838 [2024-07-15 14:05:39.447234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.838 [2024-07-15 14:05:39.447259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.838 qpair failed and we were unable to recover it. 00:26:44.838 [2024-07-15 14:05:39.447371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.838 [2024-07-15 14:05:39.447397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.838 qpair failed and we were unable to recover it. 00:26:44.838 [2024-07-15 14:05:39.447522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.838 [2024-07-15 14:05:39.447548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.838 qpair failed and we were unable to recover it. 00:26:44.838 [2024-07-15 14:05:39.447677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.838 [2024-07-15 14:05:39.447703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.838 qpair failed and we were unable to recover it. 00:26:44.838 [2024-07-15 14:05:39.447839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.838 [2024-07-15 14:05:39.447866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.838 qpair failed and we were unable to recover it. 00:26:44.838 [2024-07-15 14:05:39.447969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.838 [2024-07-15 14:05:39.447996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.838 qpair failed and we were unable to recover it. 00:26:44.838 [2024-07-15 14:05:39.448128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.838 [2024-07-15 14:05:39.448154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.838 qpair failed and we were unable to recover it. 00:26:44.838 [2024-07-15 14:05:39.448280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.838 [2024-07-15 14:05:39.448306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.838 qpair failed and we were unable to recover it. 00:26:44.838 [2024-07-15 14:05:39.448398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.838 [2024-07-15 14:05:39.448424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.838 qpair failed and we were unable to recover it. 
00:26:44.838 [2024-07-15 14:05:39.448552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.838 [2024-07-15 14:05:39.448578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.838 qpair failed and we were unable to recover it. 00:26:44.838 [2024-07-15 14:05:39.448684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.838 [2024-07-15 14:05:39.448710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.838 qpair failed and we were unable to recover it. 00:26:44.838 [2024-07-15 14:05:39.448826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.838 [2024-07-15 14:05:39.448853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.838 qpair failed and we were unable to recover it. 00:26:44.838 [2024-07-15 14:05:39.448954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.838 [2024-07-15 14:05:39.448979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.838 qpair failed and we were unable to recover it. 00:26:44.838 [2024-07-15 14:05:39.449201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.838 [2024-07-15 14:05:39.449239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.838 qpair failed and we were unable to recover it. 00:26:44.838 [2024-07-15 14:05:39.449365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.838 [2024-07-15 14:05:39.449391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.838 qpair failed and we were unable to recover it. 00:26:44.838 [2024-07-15 14:05:39.449527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.838 [2024-07-15 14:05:39.449553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.838 qpair failed and we were unable to recover it. 00:26:44.838 [2024-07-15 14:05:39.449672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.838 [2024-07-15 14:05:39.449698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.838 qpair failed and we were unable to recover it. 00:26:44.838 [2024-07-15 14:05:39.449797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.838 [2024-07-15 14:05:39.449824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.838 qpair failed and we were unable to recover it. 00:26:44.838 [2024-07-15 14:05:39.449926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.838 [2024-07-15 14:05:39.449956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.838 qpair failed and we were unable to recover it. 
00:26:44.838 [2024-07-15 14:05:39.450084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.838 [2024-07-15 14:05:39.450110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.838 qpair failed and we were unable to recover it. 00:26:44.838 [2024-07-15 14:05:39.450216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.838 [2024-07-15 14:05:39.450242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.838 qpair failed and we were unable to recover it. 00:26:44.838 14:05:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:44.838 [2024-07-15 14:05:39.450415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.838 [2024-07-15 14:05:39.450442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.838 qpair failed and we were unable to recover it. 00:26:44.839 14:05:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:44.839 [2024-07-15 14:05:39.450568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.839 [2024-07-15 14:05:39.450594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.839 qpair failed and we were unable to recover it. 00:26:44.839 14:05:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.839 [2024-07-15 14:05:39.450725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.839 14:05:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:44.839 [2024-07-15 14:05:39.450759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.839 qpair failed and we were unable to recover it. 00:26:44.839 [2024-07-15 14:05:39.450867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.839 [2024-07-15 14:05:39.450893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.839 qpair failed and we were unable to recover it. 00:26:44.839 [2024-07-15 14:05:39.451001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.839 [2024-07-15 14:05:39.451027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.839 qpair failed and we were unable to recover it. 00:26:44.839 [2024-07-15 14:05:39.451147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.839 [2024-07-15 14:05:39.451173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.839 qpair failed and we were unable to recover it. 
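The shell trace interleaved with the errors above shows the test arming its cleanup trap (process_shm / nvmftestfini on exit) and then calling rpc_cmd bdev_malloc_create 64 512 -b Malloc0. rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py; run by hand against a target listening on the default RPC socket (an assumption, the socket path is not shown in this log), the same step is roughly:

    # Create the 64 MiB, 512-byte-block malloc bdev the test will export as Malloc0.
    ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0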
00:26:44.839 [2024-07-15 14:05:39.451296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.839 [2024-07-15 14:05:39.451322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.839 qpair failed and we were unable to recover it. 00:26:44.839 [2024-07-15 14:05:39.451427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.839 [2024-07-15 14:05:39.451453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.839 qpair failed and we were unable to recover it. 00:26:44.839 [2024-07-15 14:05:39.451587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.839 [2024-07-15 14:05:39.451613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.839 qpair failed and we were unable to recover it. 00:26:44.839 [2024-07-15 14:05:39.451720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.839 [2024-07-15 14:05:39.451752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.839 qpair failed and we were unable to recover it. 00:26:44.839 [2024-07-15 14:05:39.451852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.839 [2024-07-15 14:05:39.451878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.839 qpair failed and we were unable to recover it. 00:26:44.839 [2024-07-15 14:05:39.451987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.839 [2024-07-15 14:05:39.452013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.839 qpair failed and we were unable to recover it. 00:26:44.839 [2024-07-15 14:05:39.452246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.839 [2024-07-15 14:05:39.452273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.839 qpair failed and we were unable to recover it. 00:26:44.839 [2024-07-15 14:05:39.452370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.839 [2024-07-15 14:05:39.452396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.839 qpair failed and we were unable to recover it. 00:26:44.839 [2024-07-15 14:05:39.452614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.839 [2024-07-15 14:05:39.452640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.839 qpair failed and we were unable to recover it. 00:26:44.839 [2024-07-15 14:05:39.452790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.839 [2024-07-15 14:05:39.452817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.839 qpair failed and we were unable to recover it. 
00:26:44.839 [2024-07-15 14:05:39.452926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.839 [2024-07-15 14:05:39.452952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.839 qpair failed and we were unable to recover it. 00:26:44.839 [2024-07-15 14:05:39.453100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.839 [2024-07-15 14:05:39.453127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.839 qpair failed and we were unable to recover it. 00:26:44.839 [2024-07-15 14:05:39.453298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.839 [2024-07-15 14:05:39.453324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.839 qpair failed and we were unable to recover it. 00:26:44.839 [2024-07-15 14:05:39.453492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.839 [2024-07-15 14:05:39.453518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.839 qpair failed and we were unable to recover it. 00:26:44.839 [2024-07-15 14:05:39.453611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.839 [2024-07-15 14:05:39.453637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.839 qpair failed and we were unable to recover it. 00:26:44.839 [2024-07-15 14:05:39.453814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.839 [2024-07-15 14:05:39.453841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.839 qpair failed and we were unable to recover it. 00:26:44.839 [2024-07-15 14:05:39.453951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.839 [2024-07-15 14:05:39.453977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.839 qpair failed and we were unable to recover it. 00:26:44.839 [2024-07-15 14:05:39.454174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.839 [2024-07-15 14:05:39.454200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.839 qpair failed and we were unable to recover it. 00:26:44.839 [2024-07-15 14:05:39.454355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.839 [2024-07-15 14:05:39.454381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.839 qpair failed and we were unable to recover it. 00:26:44.839 [2024-07-15 14:05:39.454545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.839 [2024-07-15 14:05:39.454571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.839 qpair failed and we were unable to recover it. 
00:26:44.839 [2024-07-15 14:05:39.454703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.839 [2024-07-15 14:05:39.454729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.839 qpair failed and we were unable to recover it. 00:26:44.839 [2024-07-15 14:05:39.454836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.839 [2024-07-15 14:05:39.454862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.839 qpair failed and we were unable to recover it. 00:26:44.839 [2024-07-15 14:05:39.454963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.839 [2024-07-15 14:05:39.454989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.839 qpair failed and we were unable to recover it. 00:26:44.839 [2024-07-15 14:05:39.455132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.839 [2024-07-15 14:05:39.455158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.839 qpair failed and we were unable to recover it. 00:26:44.839 [2024-07-15 14:05:39.455391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.839 [2024-07-15 14:05:39.455428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.839 qpair failed and we were unable to recover it. 00:26:44.839 [2024-07-15 14:05:39.455640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.839 [2024-07-15 14:05:39.455666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.839 qpair failed and we were unable to recover it. 00:26:44.839 [2024-07-15 14:05:39.455789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.839 [2024-07-15 14:05:39.455816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.839 qpair failed and we were unable to recover it. 00:26:44.839 [2024-07-15 14:05:39.455921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.839 [2024-07-15 14:05:39.455946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.839 qpair failed and we were unable to recover it. 00:26:44.839 [2024-07-15 14:05:39.456085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.839 [2024-07-15 14:05:39.456111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.839 qpair failed and we were unable to recover it. 00:26:44.839 [2024-07-15 14:05:39.456351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.839 [2024-07-15 14:05:39.456391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.839 qpair failed and we were unable to recover it. 
00:26:44.839 [2024-07-15 14:05:39.456538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.839 [2024-07-15 14:05:39.456564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.839 qpair failed and we were unable to recover it. 00:26:44.839 [2024-07-15 14:05:39.456720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.839 [2024-07-15 14:05:39.456752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.839 qpair failed and we were unable to recover it. 00:26:44.839 [2024-07-15 14:05:39.456862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.839 [2024-07-15 14:05:39.456888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.839 qpair failed and we were unable to recover it. 00:26:44.839 [2024-07-15 14:05:39.456990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.839 [2024-07-15 14:05:39.457015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.839 qpair failed and we were unable to recover it. 00:26:44.839 [2024-07-15 14:05:39.457128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.840 [2024-07-15 14:05:39.457153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.840 qpair failed and we were unable to recover it. 00:26:44.840 [2024-07-15 14:05:39.457259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.840 [2024-07-15 14:05:39.457284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.840 qpair failed and we were unable to recover it. 00:26:44.840 [2024-07-15 14:05:39.457418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.840 [2024-07-15 14:05:39.457443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.840 qpair failed and we were unable to recover it. 00:26:44.840 [2024-07-15 14:05:39.457598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.840 [2024-07-15 14:05:39.457624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.840 qpair failed and we were unable to recover it. 00:26:44.840 [2024-07-15 14:05:39.457749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.840 [2024-07-15 14:05:39.457775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.840 qpair failed and we were unable to recover it. 00:26:44.840 [2024-07-15 14:05:39.457967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.840 [2024-07-15 14:05:39.457993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.840 qpair failed and we were unable to recover it. 
00:26:44.840 [2024-07-15 14:05:39.458217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.840 [2024-07-15 14:05:39.458243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.840 qpair failed and we were unable to recover it. 00:26:44.840 [2024-07-15 14:05:39.458424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.840 [2024-07-15 14:05:39.458449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.840 qpair failed and we were unable to recover it. 00:26:44.840 [2024-07-15 14:05:39.458578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.840 [2024-07-15 14:05:39.458604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.840 qpair failed and we were unable to recover it. 00:26:44.840 [2024-07-15 14:05:39.458728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.840 [2024-07-15 14:05:39.458782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.840 qpair failed and we were unable to recover it. 00:26:44.840 [2024-07-15 14:05:39.458888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.840 [2024-07-15 14:05:39.458913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.840 qpair failed and we were unable to recover it. 00:26:44.840 [2024-07-15 14:05:39.459014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.840 [2024-07-15 14:05:39.459040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.840 qpair failed and we were unable to recover it. 00:26:44.840 [2024-07-15 14:05:39.459199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.840 [2024-07-15 14:05:39.459225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.840 qpair failed and we were unable to recover it. 00:26:44.840 [2024-07-15 14:05:39.459418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.840 [2024-07-15 14:05:39.459454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.840 qpair failed and we were unable to recover it. 00:26:44.840 [2024-07-15 14:05:39.459694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.840 [2024-07-15 14:05:39.459720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.840 qpair failed and we were unable to recover it. 00:26:44.840 [2024-07-15 14:05:39.459858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.840 [2024-07-15 14:05:39.459884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.840 qpair failed and we were unable to recover it. 
00:26:44.840 [2024-07-15 14:05:39.459987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.840 [2024-07-15 14:05:39.460013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.840 qpair failed and we were unable to recover it. 00:26:44.840 [2024-07-15 14:05:39.460142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.840 [2024-07-15 14:05:39.460167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.840 qpair failed and we were unable to recover it. 00:26:44.840 [2024-07-15 14:05:39.460293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.840 [2024-07-15 14:05:39.460319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.840 qpair failed and we were unable to recover it. 00:26:44.840 [2024-07-15 14:05:39.460534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.840 [2024-07-15 14:05:39.460570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.840 qpair failed and we were unable to recover it. 00:26:44.840 [2024-07-15 14:05:39.460710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.840 [2024-07-15 14:05:39.460753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.840 qpair failed and we were unable to recover it. 00:26:44.840 [2024-07-15 14:05:39.460859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.840 [2024-07-15 14:05:39.460884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc4000b90 with addr=10.0.0.2, port=4420 00:26:44.840 qpair failed and we were unable to recover it. 00:26:44.840 [2024-07-15 14:05:39.461018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.840 [2024-07-15 14:05:39.461063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.840 qpair failed and we were unable to recover it. 00:26:44.840 [2024-07-15 14:05:39.461179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.840 [2024-07-15 14:05:39.461207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.840 qpair failed and we were unable to recover it. 00:26:44.840 [2024-07-15 14:05:39.461340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.840 [2024-07-15 14:05:39.461367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.840 qpair failed and we were unable to recover it. 00:26:44.840 [2024-07-15 14:05:39.461520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.840 [2024-07-15 14:05:39.461547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.840 qpair failed and we were unable to recover it. 
00:26:44.840 [2024-07-15 14:05:39.461679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.840 [2024-07-15 14:05:39.461706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.840 qpair failed and we were unable to recover it. 00:26:44.840 [2024-07-15 14:05:39.461812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.840 [2024-07-15 14:05:39.461839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.840 qpair failed and we were unable to recover it. 00:26:44.840 [2024-07-15 14:05:39.461947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.840 [2024-07-15 14:05:39.461973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.840 qpair failed and we were unable to recover it. 00:26:44.840 [2024-07-15 14:05:39.462105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.840 [2024-07-15 14:05:39.462132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.840 qpair failed and we were unable to recover it. 00:26:44.840 [2024-07-15 14:05:39.462285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.840 [2024-07-15 14:05:39.462312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.840 qpair failed and we were unable to recover it. 00:26:44.840 [2024-07-15 14:05:39.462408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.840 [2024-07-15 14:05:39.462435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.840 qpair failed and we were unable to recover it. 00:26:44.840 [2024-07-15 14:05:39.462537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.840 [2024-07-15 14:05:39.462564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.840 qpair failed and we were unable to recover it. 00:26:44.840 [2024-07-15 14:05:39.462732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.840 [2024-07-15 14:05:39.462774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.840 qpair failed and we were unable to recover it. 00:26:44.840 [2024-07-15 14:05:39.462909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.840 [2024-07-15 14:05:39.462935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.840 qpair failed and we were unable to recover it. 00:26:44.840 [2024-07-15 14:05:39.463038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.840 [2024-07-15 14:05:39.463070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.840 qpair failed and we were unable to recover it. 
00:26:44.840 [2024-07-15 14:05:39.463284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.840 [2024-07-15 14:05:39.463322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.840 qpair failed and we were unable to recover it. 00:26:44.840 [2024-07-15 14:05:39.463421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.840 [2024-07-15 14:05:39.463447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.840 qpair failed and we were unable to recover it. 00:26:44.840 [2024-07-15 14:05:39.463556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.840 [2024-07-15 14:05:39.463591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.840 qpair failed and we were unable to recover it. 00:26:44.840 [2024-07-15 14:05:39.463703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.840 [2024-07-15 14:05:39.463729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.840 qpair failed and we were unable to recover it. 00:26:44.840 [2024-07-15 14:05:39.463847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.841 [2024-07-15 14:05:39.463874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.841 qpair failed and we were unable to recover it. 00:26:44.841 [2024-07-15 14:05:39.464007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.841 [2024-07-15 14:05:39.464033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.841 qpair failed and we were unable to recover it. 00:26:44.841 [2024-07-15 14:05:39.464160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.841 [2024-07-15 14:05:39.464186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.841 qpair failed and we were unable to recover it. 00:26:44.841 [2024-07-15 14:05:39.464390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.841 [2024-07-15 14:05:39.464416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.841 qpair failed and we were unable to recover it. 00:26:44.841 [2024-07-15 14:05:39.464527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.841 [2024-07-15 14:05:39.464554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.841 qpair failed and we were unable to recover it. 00:26:44.841 [2024-07-15 14:05:39.464814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.841 [2024-07-15 14:05:39.464841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.841 qpair failed and we were unable to recover it. 
00:26:44.841 [2024-07-15 14:05:39.464978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.841 [2024-07-15 14:05:39.465004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.841 qpair failed and we were unable to recover it. 00:26:44.841 [2024-07-15 14:05:39.465164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.841 [2024-07-15 14:05:39.465191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.841 qpair failed and we were unable to recover it. 00:26:44.841 [2024-07-15 14:05:39.465297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.841 [2024-07-15 14:05:39.465323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.841 qpair failed and we were unable to recover it. 00:26:44.841 [2024-07-15 14:05:39.465434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.841 [2024-07-15 14:05:39.465460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.841 qpair failed and we were unable to recover it. 00:26:44.841 [2024-07-15 14:05:39.465584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.841 [2024-07-15 14:05:39.465610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.841 qpair failed and we were unable to recover it. 00:26:44.841 [2024-07-15 14:05:39.465775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.841 [2024-07-15 14:05:39.465802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.841 qpair failed and we were unable to recover it. 00:26:44.841 [2024-07-15 14:05:39.465906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.841 [2024-07-15 14:05:39.465933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.841 qpair failed and we were unable to recover it. 00:26:44.841 [2024-07-15 14:05:39.466066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.841 [2024-07-15 14:05:39.466092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.841 qpair failed and we were unable to recover it. 00:26:44.841 [2024-07-15 14:05:39.466228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.841 [2024-07-15 14:05:39.466254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.841 qpair failed and we were unable to recover it. 00:26:44.841 [2024-07-15 14:05:39.466425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.841 [2024-07-15 14:05:39.466451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.841 qpair failed and we were unable to recover it. 
00:26:44.841 [2024-07-15 14:05:39.466592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.841 [2024-07-15 14:05:39.466618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.841 qpair failed and we were unable to recover it. 00:26:44.841 [2024-07-15 14:05:39.466757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.841 [2024-07-15 14:05:39.466784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.841 qpair failed and we were unable to recover it. 00:26:44.841 [2024-07-15 14:05:39.466903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.841 [2024-07-15 14:05:39.466930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.841 qpair failed and we were unable to recover it. 00:26:44.841 [2024-07-15 14:05:39.467091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.841 [2024-07-15 14:05:39.467117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.841 qpair failed and we were unable to recover it. 00:26:44.841 [2024-07-15 14:05:39.467298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.841 [2024-07-15 14:05:39.467324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.841 qpair failed and we were unable to recover it. 00:26:44.841 [2024-07-15 14:05:39.467438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.841 [2024-07-15 14:05:39.467464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.841 qpair failed and we were unable to recover it. 00:26:44.841 [2024-07-15 14:05:39.467647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.841 [2024-07-15 14:05:39.467674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.841 qpair failed and we were unable to recover it. 00:26:44.841 [2024-07-15 14:05:39.467836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.841 [2024-07-15 14:05:39.467864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.841 qpair failed and we were unable to recover it. 00:26:44.841 [2024-07-15 14:05:39.467974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.841 [2024-07-15 14:05:39.468000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.841 qpair failed and we were unable to recover it. 00:26:44.841 [2024-07-15 14:05:39.468125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.841 [2024-07-15 14:05:39.468151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.841 qpair failed and we were unable to recover it. 
00:26:44.841 [2024-07-15 14:05:39.468376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.841 [2024-07-15 14:05:39.468403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.841 qpair failed and we were unable to recover it. 00:26:44.841 [2024-07-15 14:05:39.468557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.841 [2024-07-15 14:05:39.468583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.841 qpair failed and we were unable to recover it. 00:26:44.841 [2024-07-15 14:05:39.468723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.841 [2024-07-15 14:05:39.468759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.841 qpair failed and we were unable to recover it. 00:26:44.841 [2024-07-15 14:05:39.468930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.841 [2024-07-15 14:05:39.468956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.841 qpair failed and we were unable to recover it. 00:26:44.841 [2024-07-15 14:05:39.469097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.841 [2024-07-15 14:05:39.469123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.841 qpair failed and we were unable to recover it. 00:26:44.841 [2024-07-15 14:05:39.469292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.841 [2024-07-15 14:05:39.469328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.841 qpair failed and we were unable to recover it. 00:26:44.841 [2024-07-15 14:05:39.469468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.841 [2024-07-15 14:05:39.469494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.841 qpair failed and we were unable to recover it. 00:26:44.841 [2024-07-15 14:05:39.469670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.841 [2024-07-15 14:05:39.469696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.841 qpair failed and we were unable to recover it. 00:26:44.841 [2024-07-15 14:05:39.469835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.841 [2024-07-15 14:05:39.469862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.841 qpair failed and we were unable to recover it. 00:26:44.841 [2024-07-15 14:05:39.469973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.841 [2024-07-15 14:05:39.470004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.841 qpair failed and we were unable to recover it. 
00:26:44.841 [2024-07-15 14:05:39.470151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.841 [2024-07-15 14:05:39.470178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.841 qpair failed and we were unable to recover it. 00:26:44.841 [2024-07-15 14:05:39.470320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.841 [2024-07-15 14:05:39.470350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.841 qpair failed and we were unable to recover it. 00:26:44.841 [2024-07-15 14:05:39.470498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.841 [2024-07-15 14:05:39.470524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.841 qpair failed and we were unable to recover it. 00:26:44.841 [2024-07-15 14:05:39.470703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.841 [2024-07-15 14:05:39.470728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.841 qpair failed and we were unable to recover it. 00:26:44.841 [2024-07-15 14:05:39.470873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.841 [2024-07-15 14:05:39.470899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.842 qpair failed and we were unable to recover it. 00:26:44.842 [2024-07-15 14:05:39.471068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.842 [2024-07-15 14:05:39.471094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.842 qpair failed and we were unable to recover it. 00:26:44.842 [2024-07-15 14:05:39.471268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.842 [2024-07-15 14:05:39.471294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.842 qpair failed and we were unable to recover it. 00:26:44.842 [2024-07-15 14:05:39.471440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.842 [2024-07-15 14:05:39.471466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.842 qpair failed and we were unable to recover it. 00:26:44.842 [2024-07-15 14:05:39.471609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.842 [2024-07-15 14:05:39.471635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.842 qpair failed and we were unable to recover it. 00:26:44.842 [2024-07-15 14:05:39.471803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.842 [2024-07-15 14:05:39.471830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.842 qpair failed and we were unable to recover it. 
00:26:44.842 [2024-07-15 14:05:39.471969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.842 [2024-07-15 14:05:39.471995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.842 qpair failed and we were unable to recover it. 00:26:44.842 [2024-07-15 14:05:39.472105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.842 [2024-07-15 14:05:39.472131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.842 qpair failed and we were unable to recover it. 00:26:44.842 [2024-07-15 14:05:39.472283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.842 [2024-07-15 14:05:39.472319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.842 qpair failed and we were unable to recover it. 00:26:44.842 [2024-07-15 14:05:39.472461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.842 [2024-07-15 14:05:39.472487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.842 qpair failed and we were unable to recover it. 00:26:44.842 [2024-07-15 14:05:39.472603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.842 [2024-07-15 14:05:39.472628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.842 qpair failed and we were unable to recover it. 00:26:44.842 [2024-07-15 14:05:39.472805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.842 [2024-07-15 14:05:39.472833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.842 qpair failed and we were unable to recover it. 00:26:44.842 [2024-07-15 14:05:39.473035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.842 [2024-07-15 14:05:39.473061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.842 qpair failed and we were unable to recover it. 00:26:44.842 [2024-07-15 14:05:39.473183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.842 [2024-07-15 14:05:39.473209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.842 qpair failed and we were unable to recover it. 00:26:44.842 [2024-07-15 14:05:39.473369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.842 [2024-07-15 14:05:39.473407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.842 qpair failed and we were unable to recover it. 00:26:44.842 [2024-07-15 14:05:39.473595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.842 [2024-07-15 14:05:39.473621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.842 qpair failed and we were unable to recover it. 
00:26:44.842 [2024-07-15 14:05:39.473744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.842 [2024-07-15 14:05:39.473771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.842 qpair failed and we were unable to recover it. 00:26:44.842 [2024-07-15 14:05:39.473897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.842 [2024-07-15 14:05:39.473923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.842 qpair failed and we were unable to recover it. 00:26:44.842 [2024-07-15 14:05:39.474104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.842 [2024-07-15 14:05:39.474130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.842 qpair failed and we were unable to recover it. 00:26:44.842 [2024-07-15 14:05:39.474275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.842 [2024-07-15 14:05:39.474301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.842 qpair failed and we were unable to recover it. 00:26:44.842 [2024-07-15 14:05:39.474440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.842 [2024-07-15 14:05:39.474465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.842 qpair failed and we were unable to recover it. 00:26:44.842 [2024-07-15 14:05:39.474614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.842 [2024-07-15 14:05:39.474640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.842 qpair failed and we were unable to recover it. 00:26:44.842 Malloc0 00:26:44.842 [2024-07-15 14:05:39.474788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.842 [2024-07-15 14:05:39.474815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.842 qpair failed and we were unable to recover it. 00:26:44.842 [2024-07-15 14:05:39.474990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.842 [2024-07-15 14:05:39.475016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.842 qpair failed and we were unable to recover it. 00:26:44.842 14:05:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.842 [2024-07-15 14:05:39.475189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.842 [2024-07-15 14:05:39.475215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.842 qpair failed and we were unable to recover it. 
00:26:44.842 14:05:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:44.842 [2024-07-15 14:05:39.475334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.842 [2024-07-15 14:05:39.475361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.842 qpair failed and we were unable to recover it. 00:26:44.842 14:05:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.842 14:05:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:44.842 [2024-07-15 14:05:39.475518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.842 [2024-07-15 14:05:39.475544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.842 qpair failed and we were unable to recover it. 00:26:44.842 [2024-07-15 14:05:39.475712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.842 [2024-07-15 14:05:39.475745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.842 qpair failed and we were unable to recover it. 00:26:44.842 [2024-07-15 14:05:39.475867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.842 [2024-07-15 14:05:39.475893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.842 qpair failed and we were unable to recover it. 00:26:44.842 [2024-07-15 14:05:39.476046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.842 [2024-07-15 14:05:39.476072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.842 qpair failed and we were unable to recover it. 00:26:44.842 [2024-07-15 14:05:39.476254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.842 [2024-07-15 14:05:39.476280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.842 qpair failed and we were unable to recover it. 00:26:44.842 [2024-07-15 14:05:39.476433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.842 [2024-07-15 14:05:39.476459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.842 qpair failed and we were unable to recover it. 00:26:44.842 [2024-07-15 14:05:39.476633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.842 [2024-07-15 14:05:39.476659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.842 qpair failed and we were unable to recover it. 00:26:44.842 [2024-07-15 14:05:39.476788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.842 [2024-07-15 14:05:39.476815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.842 qpair failed and we were unable to recover it. 
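Interleaved with the connection-refused output above is the first target-side setup step: host/target_disconnect.sh@21 runs rpc_cmd nvmf_create_transport -t tcp -o, which initializes the NVMe-oF TCP transport inside the running SPDK target (the *** TCP Transport Init *** notice a little further down is the target acknowledging it). rpc_cmd is the autotest wrapper around SPDK's JSON-RPC client; issued by hand against a target on the default RPC socket the same step would look roughly like this sketch (the scripts/rpc.py path is an assumption, and -o is carried over verbatim from the test's invocation):

# initialize the TCP transport in the running nvmf target
./scripts/rpc.py nvmf_create_transport -t tcp -o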
00:26:44.842 [2024-07-15 14:05:39.476937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.843 [2024-07-15 14:05:39.476964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.843 qpair failed and we were unable to recover it. 00:26:44.843 [2024-07-15 14:05:39.477137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.843 [2024-07-15 14:05:39.477164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.843 qpair failed and we were unable to recover it. 00:26:44.843 [2024-07-15 14:05:39.477277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.843 [2024-07-15 14:05:39.477303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.843 qpair failed and we were unable to recover it. 00:26:44.843 [2024-07-15 14:05:39.477448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.843 [2024-07-15 14:05:39.477474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.843 qpair failed and we were unable to recover it. 00:26:44.843 [2024-07-15 14:05:39.477688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.843 [2024-07-15 14:05:39.477714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.843 qpair failed and we were unable to recover it. 00:26:44.843 [2024-07-15 14:05:39.477868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.843 [2024-07-15 14:05:39.477896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.843 qpair failed and we were unable to recover it. 00:26:44.843 [2024-07-15 14:05:39.478079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.843 [2024-07-15 14:05:39.478105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.843 qpair failed and we were unable to recover it. 00:26:44.843 [2024-07-15 14:05:39.478314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.843 [2024-07-15 14:05:39.478339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.843 qpair failed and we were unable to recover it. 00:26:44.843 [2024-07-15 14:05:39.478465] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:44.843 [2024-07-15 14:05:39.478505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.843 [2024-07-15 14:05:39.478531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.843 qpair failed and we were unable to recover it. 
00:26:44.843 [2024-07-15 14:05:39.478677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.843 [2024-07-15 14:05:39.478702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.843 qpair failed and we were unable to recover it. 00:26:44.843 [2024-07-15 14:05:39.478822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.843 [2024-07-15 14:05:39.478849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.843 qpair failed and we were unable to recover it. 00:26:44.843 [2024-07-15 14:05:39.478986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.843 [2024-07-15 14:05:39.479012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.843 qpair failed and we were unable to recover it. 00:26:44.843 [2024-07-15 14:05:39.479109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.843 [2024-07-15 14:05:39.479135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.843 qpair failed and we were unable to recover it. 00:26:44.843 [2024-07-15 14:05:39.479292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.843 [2024-07-15 14:05:39.479319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.843 qpair failed and we were unable to recover it. 00:26:44.843 [2024-07-15 14:05:39.479492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.843 [2024-07-15 14:05:39.479518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.843 qpair failed and we were unable to recover it. 00:26:44.843 [2024-07-15 14:05:39.479623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.843 [2024-07-15 14:05:39.479661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.843 qpair failed and we were unable to recover it. 00:26:44.843 [2024-07-15 14:05:39.479796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.843 [2024-07-15 14:05:39.479823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.843 qpair failed and we were unable to recover it. 00:26:44.843 [2024-07-15 14:05:39.479950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.843 [2024-07-15 14:05:39.479976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.843 qpair failed and we were unable to recover it. 00:26:44.843 [2024-07-15 14:05:39.480128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.843 [2024-07-15 14:05:39.480154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.843 qpair failed and we were unable to recover it. 
00:26:44.843 [2024-07-15 14:05:39.480288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.843 [2024-07-15 14:05:39.480314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.843 qpair failed and we were unable to recover it. 00:26:44.843 [2024-07-15 14:05:39.480452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.843 [2024-07-15 14:05:39.480489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.843 qpair failed and we were unable to recover it. 00:26:44.843 [2024-07-15 14:05:39.480639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.843 [2024-07-15 14:05:39.480665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.843 qpair failed and we were unable to recover it. 00:26:44.843 [2024-07-15 14:05:39.480921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.843 [2024-07-15 14:05:39.480949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.843 qpair failed and we were unable to recover it. 00:26:44.843 [2024-07-15 14:05:39.481098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.843 [2024-07-15 14:05:39.481124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.843 qpair failed and we were unable to recover it. 00:26:44.843 [2024-07-15 14:05:39.481265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.843 [2024-07-15 14:05:39.481291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.843 qpair failed and we were unable to recover it. 00:26:44.843 [2024-07-15 14:05:39.481518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.843 [2024-07-15 14:05:39.481548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.843 qpair failed and we were unable to recover it. 00:26:44.843 [2024-07-15 14:05:39.481689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.843 [2024-07-15 14:05:39.481729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.843 qpair failed and we were unable to recover it. 00:26:44.843 [2024-07-15 14:05:39.481915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.843 [2024-07-15 14:05:39.481942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.843 qpair failed and we were unable to recover it. 00:26:44.843 [2024-07-15 14:05:39.482110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.843 [2024-07-15 14:05:39.482136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.843 qpair failed and we were unable to recover it. 
00:26:44.843 [2024-07-15 14:05:39.482297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.843 [2024-07-15 14:05:39.482324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.843 qpair failed and we were unable to recover it. 00:26:44.843 [2024-07-15 14:05:39.482519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.843 [2024-07-15 14:05:39.482545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.843 qpair failed and we were unable to recover it. 00:26:44.843 [2024-07-15 14:05:39.482759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.843 [2024-07-15 14:05:39.482785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.843 qpair failed and we were unable to recover it. 00:26:44.843 [2024-07-15 14:05:39.482917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.843 [2024-07-15 14:05:39.482944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.843 qpair failed and we were unable to recover it. 00:26:44.843 [2024-07-15 14:05:39.483088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.843 [2024-07-15 14:05:39.483114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.843 qpair failed and we were unable to recover it. 00:26:44.843 [2024-07-15 14:05:39.483232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.843 [2024-07-15 14:05:39.483268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.843 qpair failed and we were unable to recover it. 00:26:44.843 [2024-07-15 14:05:39.483395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.843 [2024-07-15 14:05:39.483421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.843 qpair failed and we were unable to recover it. 00:26:44.843 [2024-07-15 14:05:39.483568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.843 [2024-07-15 14:05:39.483595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.843 qpair failed and we were unable to recover it. 00:26:44.843 [2024-07-15 14:05:39.483742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.843 [2024-07-15 14:05:39.483770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.843 qpair failed and we were unable to recover it. 00:26:44.843 [2024-07-15 14:05:39.483958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.843 [2024-07-15 14:05:39.483984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.843 qpair failed and we were unable to recover it. 
00:26:44.843 [2024-07-15 14:05:39.484123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.843 [2024-07-15 14:05:39.484149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.843 qpair failed and we were unable to recover it. 00:26:44.843 [2024-07-15 14:05:39.484331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.843 [2024-07-15 14:05:39.484358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.843 qpair failed and we were unable to recover it. 00:26:44.844 [2024-07-15 14:05:39.484514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.844 [2024-07-15 14:05:39.484540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.844 qpair failed and we were unable to recover it. 00:26:44.844 [2024-07-15 14:05:39.484733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.844 [2024-07-15 14:05:39.484767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.844 qpair failed and we were unable to recover it. 00:26:44.844 [2024-07-15 14:05:39.484960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.844 [2024-07-15 14:05:39.484987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.844 qpair failed and we were unable to recover it. 00:26:44.844 [2024-07-15 14:05:39.485157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.844 [2024-07-15 14:05:39.485194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.844 qpair failed and we were unable to recover it. 00:26:44.844 [2024-07-15 14:05:39.485296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.844 [2024-07-15 14:05:39.485322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.844 qpair failed and we were unable to recover it. 00:26:44.844 [2024-07-15 14:05:39.485489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.844 [2024-07-15 14:05:39.485515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.844 qpair failed and we were unable to recover it. 00:26:44.844 [2024-07-15 14:05:39.485634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.844 [2024-07-15 14:05:39.485660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.844 qpair failed and we were unable to recover it. 00:26:44.844 [2024-07-15 14:05:39.485820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.844 [2024-07-15 14:05:39.485847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.844 qpair failed and we were unable to recover it. 
00:26:44.844 [2024-07-15 14:05:39.485980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.844 [2024-07-15 14:05:39.486006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.844 qpair failed and we were unable to recover it. 00:26:44.844 [2024-07-15 14:05:39.486126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.844 [2024-07-15 14:05:39.486153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.844 qpair failed and we were unable to recover it. 00:26:44.844 [2024-07-15 14:05:39.486315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.844 [2024-07-15 14:05:39.486341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.844 qpair failed and we were unable to recover it. 00:26:44.844 [2024-07-15 14:05:39.486482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.844 [2024-07-15 14:05:39.486508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.844 qpair failed and we were unable to recover it. 00:26:44.844 [2024-07-15 14:05:39.486661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.844 [2024-07-15 14:05:39.486687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.844 qpair failed and we were unable to recover it. 00:26:44.844 14:05:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.844 [2024-07-15 14:05:39.486811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.844 [2024-07-15 14:05:39.486838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.844 qpair failed and we were unable to recover it. 00:26:44.844 14:05:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:44.844 [2024-07-15 14:05:39.486979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.844 14:05:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.844 [2024-07-15 14:05:39.487006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.844 qpair failed and we were unable to recover it. 00:26:44.844 14:05:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:44.844 [2024-07-15 14:05:39.487129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.844 [2024-07-15 14:05:39.487156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.844 qpair failed and we were unable to recover it. 
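The next step threaded through the errors above is host/target_disconnect.sh@22 calling rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001: it creates the NVMe-oF subsystem the test will export, with -a allowing any host NQN to connect and -s setting the serial number. A rough stand-alone equivalent, assuming scripts/rpc.py and the target's default RPC socket:

# create the subsystem, allow any host, set a fixed serial number
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001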
00:26:44.844 [2024-07-15 14:05:39.487278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.844 [2024-07-15 14:05:39.487304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.844 qpair failed and we were unable to recover it. 00:26:44.844 [2024-07-15 14:05:39.487434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.844 [2024-07-15 14:05:39.487460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.844 qpair failed and we were unable to recover it. 00:26:44.844 [2024-07-15 14:05:39.487650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.844 [2024-07-15 14:05:39.487686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.844 qpair failed and we were unable to recover it. 00:26:44.844 [2024-07-15 14:05:39.487828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.844 [2024-07-15 14:05:39.487854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.844 qpair failed and we were unable to recover it. 00:26:44.844 [2024-07-15 14:05:39.487994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.844 [2024-07-15 14:05:39.488032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.844 qpair failed and we were unable to recover it. 00:26:44.844 [2024-07-15 14:05:39.488180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.844 [2024-07-15 14:05:39.488206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.844 qpair failed and we were unable to recover it. 00:26:44.844 [2024-07-15 14:05:39.488376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.844 [2024-07-15 14:05:39.488403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.844 qpair failed and we were unable to recover it. 00:26:44.844 [2024-07-15 14:05:39.488537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.844 [2024-07-15 14:05:39.488568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.844 qpair failed and we were unable to recover it. 00:26:44.844 [2024-07-15 14:05:39.488705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.844 [2024-07-15 14:05:39.488731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.844 qpair failed and we were unable to recover it. 00:26:44.844 [2024-07-15 14:05:39.488925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.844 [2024-07-15 14:05:39.488951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.844 qpair failed and we were unable to recover it. 
00:26:44.844 [2024-07-15 14:05:39.489096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.844 [2024-07-15 14:05:39.489122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.844 qpair failed and we were unable to recover it. 00:26:44.844 [2024-07-15 14:05:39.489277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.844 [2024-07-15 14:05:39.489303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.844 qpair failed and we were unable to recover it. 00:26:44.844 [2024-07-15 14:05:39.489472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.844 [2024-07-15 14:05:39.489498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.844 qpair failed and we were unable to recover it. 00:26:44.844 [2024-07-15 14:05:39.489673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.844 [2024-07-15 14:05:39.489699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.844 qpair failed and we were unable to recover it. 00:26:44.844 [2024-07-15 14:05:39.489878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.844 [2024-07-15 14:05:39.489905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.844 qpair failed and we were unable to recover it. 00:26:44.844 [2024-07-15 14:05:39.490055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.844 [2024-07-15 14:05:39.490081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.844 qpair failed and we were unable to recover it. 00:26:44.844 [2024-07-15 14:05:39.490235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.844 [2024-07-15 14:05:39.490261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.844 qpair failed and we were unable to recover it. 00:26:44.844 [2024-07-15 14:05:39.490402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.844 [2024-07-15 14:05:39.490428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.844 qpair failed and we were unable to recover it. 00:26:44.844 [2024-07-15 14:05:39.490573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.844 [2024-07-15 14:05:39.490600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.844 qpair failed and we were unable to recover it. 00:26:44.844 [2024-07-15 14:05:39.490794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.844 [2024-07-15 14:05:39.490821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.844 qpair failed and we were unable to recover it. 
00:26:44.844 [2024-07-15 14:05:39.490991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.844 [2024-07-15 14:05:39.491017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.844 qpair failed and we were unable to recover it. 00:26:44.844 [2024-07-15 14:05:39.491143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.844 [2024-07-15 14:05:39.491170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.844 qpair failed and we were unable to recover it. 00:26:44.844 [2024-07-15 14:05:39.491311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.844 [2024-07-15 14:05:39.491337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.844 qpair failed and we were unable to recover it. 00:26:44.845 [2024-07-15 14:05:39.491483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.845 [2024-07-15 14:05:39.491509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.845 qpair failed and we were unable to recover it. 00:26:44.845 [2024-07-15 14:05:39.491654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.845 [2024-07-15 14:05:39.491681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.845 qpair failed and we were unable to recover it. 00:26:44.845 [2024-07-15 14:05:39.491825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.845 [2024-07-15 14:05:39.491852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.845 qpair failed and we were unable to recover it. 00:26:44.845 [2024-07-15 14:05:39.491985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.845 [2024-07-15 14:05:39.492011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.845 qpair failed and we were unable to recover it. 00:26:44.845 [2024-07-15 14:05:39.492187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.845 [2024-07-15 14:05:39.492213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.845 qpair failed and we were unable to recover it. 00:26:44.845 [2024-07-15 14:05:39.492355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.845 [2024-07-15 14:05:39.492381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.845 qpair failed and we were unable to recover it. 00:26:44.845 [2024-07-15 14:05:39.492562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.845 [2024-07-15 14:05:39.492588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.845 qpair failed and we were unable to recover it. 
00:26:44.845 [2024-07-15 14:05:39.492707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.845 [2024-07-15 14:05:39.492733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.845 qpair failed and we were unable to recover it. 00:26:44.845 [2024-07-15 14:05:39.492867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.845 [2024-07-15 14:05:39.492894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.845 qpair failed and we were unable to recover it. 00:26:44.845 [2024-07-15 14:05:39.493067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.845 [2024-07-15 14:05:39.493093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.845 qpair failed and we were unable to recover it. 00:26:44.845 [2024-07-15 14:05:39.493248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.845 [2024-07-15 14:05:39.493274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.845 qpair failed and we were unable to recover it. 00:26:44.845 [2024-07-15 14:05:39.493442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.845 [2024-07-15 14:05:39.493469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.845 qpair failed and we were unable to recover it. 00:26:44.845 [2024-07-15 14:05:39.493670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.845 [2024-07-15 14:05:39.493697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.845 qpair failed and we were unable to recover it. 00:26:44.845 [2024-07-15 14:05:39.493823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.845 [2024-07-15 14:05:39.493850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.845 qpair failed and we were unable to recover it. 00:26:44.845 [2024-07-15 14:05:39.494020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.845 [2024-07-15 14:05:39.494046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.845 qpair failed and we were unable to recover it. 00:26:44.845 [2024-07-15 14:05:39.494153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.845 [2024-07-15 14:05:39.494183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.845 qpair failed and we were unable to recover it. 00:26:44.845 [2024-07-15 14:05:39.494343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.845 [2024-07-15 14:05:39.494369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.845 qpair failed and we were unable to recover it. 
00:26:44.845 [2024-07-15 14:05:39.494510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.845 [2024-07-15 14:05:39.494537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.845 qpair failed and we were unable to recover it. 00:26:44.845 [2024-07-15 14:05:39.494712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.845 [2024-07-15 14:05:39.494746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.845 14:05:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.845 qpair failed and we were unable to recover it. 00:26:44.845 14:05:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:44.845 [2024-07-15 14:05:39.494921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.845 [2024-07-15 14:05:39.494947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.845 qpair failed and we were unable to recover it. 00:26:44.845 14:05:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.845 14:05:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:44.845 [2024-07-15 14:05:39.495117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.845 [2024-07-15 14:05:39.495144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.845 qpair failed and we were unable to recover it. 00:26:44.845 [2024-07-15 14:05:39.495318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.845 [2024-07-15 14:05:39.495344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.845 qpair failed and we were unable to recover it. 00:26:44.845 [2024-07-15 14:05:39.495484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.845 [2024-07-15 14:05:39.495510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.845 qpair failed and we were unable to recover it. 00:26:44.845 [2024-07-15 14:05:39.495664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.845 [2024-07-15 14:05:39.495691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.845 qpair failed and we were unable to recover it. 00:26:44.845 [2024-07-15 14:05:39.495851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.845 [2024-07-15 14:05:39.495878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.845 qpair failed and we were unable to recover it. 
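Here host/target_disconnect.sh@24 runs rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0, attaching the Malloc0 bdev to the subsystem as a namespace. Done by hand it would look roughly like the sketch below; the bdev_malloc_create step and its size and block-size arguments are assumptions, since the bdev was created earlier in the test, outside this part of the log:

# create a malloc bdev to back the namespace (64 MiB / 512-byte blocks are placeholders)
./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
# expose it as a namespace of the subsystem
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0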
00:26:44.845 [2024-07-15 14:05:39.496012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.845 [2024-07-15 14:05:39.496038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.845 qpair failed and we were unable to recover it. 00:26:44.845 [2024-07-15 14:05:39.496212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.845 [2024-07-15 14:05:39.496238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.845 qpair failed and we were unable to recover it. 00:26:44.845 [2024-07-15 14:05:39.496373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.845 [2024-07-15 14:05:39.496399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.845 qpair failed and we were unable to recover it. 00:26:44.845 [2024-07-15 14:05:39.496541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.845 [2024-07-15 14:05:39.496567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.845 qpair failed and we were unable to recover it. 00:26:44.845 [2024-07-15 14:05:39.496716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.845 [2024-07-15 14:05:39.496750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.845 qpair failed and we were unable to recover it. 00:26:44.845 [2024-07-15 14:05:39.496860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.845 [2024-07-15 14:05:39.496886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.845 qpair failed and we were unable to recover it. 00:26:44.845 [2024-07-15 14:05:39.497059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.845 [2024-07-15 14:05:39.497085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.845 qpair failed and we were unable to recover it. 00:26:44.845 [2024-07-15 14:05:39.497234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.845 [2024-07-15 14:05:39.497260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.845 qpair failed and we were unable to recover it. 00:26:44.845 [2024-07-15 14:05:39.497401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.845 [2024-07-15 14:05:39.497427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.845 qpair failed and we were unable to recover it. 00:26:44.845 [2024-07-15 14:05:39.497617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.845 [2024-07-15 14:05:39.497643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.845 qpair failed and we were unable to recover it. 
00:26:44.845 [2024-07-15 14:05:39.497811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.845 [2024-07-15 14:05:39.497838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.845 qpair failed and we were unable to recover it. 00:26:44.845 [2024-07-15 14:05:39.497987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.845 [2024-07-15 14:05:39.498018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.845 qpair failed and we were unable to recover it. 00:26:44.845 [2024-07-15 14:05:39.498192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.845 [2024-07-15 14:05:39.498217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.845 qpair failed and we were unable to recover it. 00:26:44.845 [2024-07-15 14:05:39.498367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.845 [2024-07-15 14:05:39.498391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.845 qpair failed and we were unable to recover it. 00:26:44.846 [2024-07-15 14:05:39.498488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.846 [2024-07-15 14:05:39.498513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.846 qpair failed and we were unable to recover it. 00:26:44.846 [2024-07-15 14:05:39.498697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.846 [2024-07-15 14:05:39.498721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.846 qpair failed and we were unable to recover it. 00:26:44.846 [2024-07-15 14:05:39.498857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.846 [2024-07-15 14:05:39.498882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.846 qpair failed and we were unable to recover it. 00:26:44.846 [2024-07-15 14:05:39.499055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.846 [2024-07-15 14:05:39.499080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.846 qpair failed and we were unable to recover it. 00:26:44.846 [2024-07-15 14:05:39.499213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.846 [2024-07-15 14:05:39.499237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.846 qpair failed and we were unable to recover it. 00:26:44.846 [2024-07-15 14:05:39.499377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.846 [2024-07-15 14:05:39.499402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.846 qpair failed and we were unable to recover it. 
00:26:44.846 [2024-07-15 14:05:39.499583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.846 [2024-07-15 14:05:39.499608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.846 qpair failed and we were unable to recover it. 00:26:44.846 [2024-07-15 14:05:39.499746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.846 [2024-07-15 14:05:39.499771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.846 qpair failed and we were unable to recover it. 00:26:44.846 [2024-07-15 14:05:39.499945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.846 [2024-07-15 14:05:39.499970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.846 qpair failed and we were unable to recover it. 00:26:44.846 [2024-07-15 14:05:39.500141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.846 [2024-07-15 14:05:39.500166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.846 qpair failed and we were unable to recover it. 00:26:44.846 [2024-07-15 14:05:39.500368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.846 [2024-07-15 14:05:39.500396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.846 qpair failed and we were unable to recover it. 00:26:44.846 [2024-07-15 14:05:39.500563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.846 [2024-07-15 14:05:39.500588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.846 qpair failed and we were unable to recover it. 00:26:44.846 [2024-07-15 14:05:39.500750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.846 [2024-07-15 14:05:39.500786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.846 qpair failed and we were unable to recover it. 00:26:44.846 [2024-07-15 14:05:39.500939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.846 [2024-07-15 14:05:39.500965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.846 qpair failed and we were unable to recover it. 00:26:44.846 [2024-07-15 14:05:39.501094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.846 [2024-07-15 14:05:39.501119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.846 qpair failed and we were unable to recover it. 00:26:44.846 [2024-07-15 14:05:39.501250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.846 [2024-07-15 14:05:39.501276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.846 qpair failed and we were unable to recover it. 
00:26:44.846 [2024-07-15 14:05:39.501399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.846 [2024-07-15 14:05:39.501425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.846 qpair failed and we were unable to recover it. 00:26:44.846 [2024-07-15 14:05:39.501606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.846 [2024-07-15 14:05:39.501634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.846 qpair failed and we were unable to recover it. 00:26:44.846 [2024-07-15 14:05:39.501753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.846 [2024-07-15 14:05:39.501780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.846 qpair failed and we were unable to recover it. 00:26:44.846 [2024-07-15 14:05:39.501926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.846 [2024-07-15 14:05:39.501953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.846 qpair failed and we were unable to recover it. 00:26:44.846 [2024-07-15 14:05:39.502116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.846 [2024-07-15 14:05:39.502143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.846 qpair failed and we were unable to recover it. 00:26:44.846 [2024-07-15 14:05:39.502281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.846 [2024-07-15 14:05:39.502308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.846 qpair failed and we were unable to recover it. 00:26:44.846 [2024-07-15 14:05:39.502448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.846 [2024-07-15 14:05:39.502474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.846 qpair failed and we were unable to recover it. 00:26:44.846 [2024-07-15 14:05:39.502632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.846 [2024-07-15 14:05:39.502658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.846 qpair failed and we were unable to recover it. 00:26:44.846 14:05:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.846 [2024-07-15 14:05:39.502845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.846 [2024-07-15 14:05:39.502873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.846 qpair failed and we were unable to recover it. 
00:26:44.846 14:05:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:44.846 14:05:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.846 [2024-07-15 14:05:39.503010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.846 [2024-07-15 14:05:39.503038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.846 qpair failed and we were unable to recover it. 00:26:44.846 14:05:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:44.846 [2024-07-15 14:05:39.503177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.846 [2024-07-15 14:05:39.503204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.846 qpair failed and we were unable to recover it. 00:26:44.846 [2024-07-15 14:05:39.503349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.846 [2024-07-15 14:05:39.503376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.846 qpair failed and we were unable to recover it. 00:26:44.846 [2024-07-15 14:05:39.503521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.846 [2024-07-15 14:05:39.503547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.846 qpair failed and we were unable to recover it. 00:26:44.846 [2024-07-15 14:05:39.503653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.846 [2024-07-15 14:05:39.503680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.846 qpair failed and we were unable to recover it. 00:26:44.846 [2024-07-15 14:05:39.503824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.846 [2024-07-15 14:05:39.503851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.846 qpair failed and we were unable to recover it. 00:26:44.846 [2024-07-15 14:05:39.503994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.846 [2024-07-15 14:05:39.504021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.846 qpair failed and we were unable to recover it. 00:26:44.846 [2024-07-15 14:05:39.504135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.846 [2024-07-15 14:05:39.504163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.846 qpair failed and we were unable to recover it. 
00:26:44.846 [2024-07-15 14:05:39.504282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.847 [2024-07-15 14:05:39.504308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.847 qpair failed and we were unable to recover it. 00:26:44.847 [2024-07-15 14:05:39.504529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.847 [2024-07-15 14:05:39.504555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.847 qpair failed and we were unable to recover it. 00:26:44.847 [2024-07-15 14:05:39.504756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.847 [2024-07-15 14:05:39.504797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.847 qpair failed and we were unable to recover it. 00:26:44.847 [2024-07-15 14:05:39.504989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.847 [2024-07-15 14:05:39.505015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.847 qpair failed and we were unable to recover it. 00:26:44.847 [2024-07-15 14:05:39.505189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.847 [2024-07-15 14:05:39.505225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.847 qpair failed and we were unable to recover it. 00:26:44.847 [2024-07-15 14:05:39.505376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.847 [2024-07-15 14:05:39.505403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.847 qpair failed and we were unable to recover it. 00:26:44.847 [2024-07-15 14:05:39.505590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.847 [2024-07-15 14:05:39.505616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.847 qpair failed and we were unable to recover it. 00:26:44.847 [2024-07-15 14:05:39.505724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.847 [2024-07-15 14:05:39.505757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.847 qpair failed and we were unable to recover it. 00:26:44.847 [2024-07-15 14:05:39.505886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.847 [2024-07-15 14:05:39.505912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.847 qpair failed and we were unable to recover it. 00:26:44.847 [2024-07-15 14:05:39.506091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.847 [2024-07-15 14:05:39.506118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.847 qpair failed and we were unable to recover it. 
00:26:44.847 [2024-07-15 14:05:39.506258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.847 [2024-07-15 14:05:39.506285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.847 qpair failed and we were unable to recover it. 00:26:44.847 [2024-07-15 14:05:39.506449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.847 [2024-07-15 14:05:39.506475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dbc000b90 with addr=10.0.0.2, port=4420 00:26:44.847 qpair failed and we were unable to recover it. 00:26:44.847 [2024-07-15 14:05:39.506710] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:44.847 [2024-07-15 14:05:39.509189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.847 [2024-07-15 14:05:39.509325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.847 [2024-07-15 14:05:39.509355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.847 [2024-07-15 14:05:39.509372] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.847 [2024-07-15 14:05:39.509385] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:44.847 [2024-07-15 14:05:39.509420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:44.847 qpair failed and we were unable to recover it. 
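Decoding the failures above: errno 111 on Linux is ECONNREFUSED, i.e. the host was dialing 10.0.0.2:4420 before anything was accepting connections; the retries stop once the target prints the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice. From then on the Fabrics CONNECT for the I/O queue is rejected on the target side ("Unknown controller ID 0x1") and the host sees sct 1, sc 130 (0x82), which appears to correspond to the NVMe-oF command-specific status "Connect Invalid Parameters". A quick, test-independent way to confirm what the errno means (illustrative only, not part of this run):

    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # prints: ECONNREFUSED - Connection refused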
00:26:44.847 14:05:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.847 14:05:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:44.847 14:05:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.847 14:05:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:44.847 14:05:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.847 14:05:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3861819 00:26:44.847 [2024-07-15 14:05:39.519111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.847 [2024-07-15 14:05:39.519233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.847 [2024-07-15 14:05:39.519261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.847 [2024-07-15 14:05:39.519277] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.847 [2024-07-15 14:05:39.519291] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:44.847 [2024-07-15 14:05:39.519321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:44.847 qpair failed and we were unable to recover it. 00:26:44.847 [2024-07-15 14:05:39.529092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.847 [2024-07-15 14:05:39.529187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.847 [2024-07-15 14:05:39.529214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.847 [2024-07-15 14:05:39.529230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.847 [2024-07-15 14:05:39.529243] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:44.847 [2024-07-15 14:05:39.529273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:44.847 qpair failed and we were unable to recover it. 
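The xtrace lines interleaved with the socket errors show host/target_disconnect.sh registering the TCP listeners through the rpc_cmd helper. Outside the autotest harness, the same two listeners could be added directly with SPDK's scripts/rpc.py; a minimal sketch, assuming a running nvmf_tgt with the subsystem nqn.2016-06.io.spdk:cnode1 already created as in this test:

    # data listener for the subsystem under test
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # discovery listener on the same address and port
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420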
00:26:44.847 [2024-07-15 14:05:39.539069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.847 [2024-07-15 14:05:39.539179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.847 [2024-07-15 14:05:39.539205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.847 [2024-07-15 14:05:39.539220] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.847 [2024-07-15 14:05:39.539234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:44.847 [2024-07-15 14:05:39.539264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:44.847 qpair failed and we were unable to recover it. 00:26:44.847 [2024-07-15 14:05:39.549105] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.847 [2024-07-15 14:05:39.549205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.847 [2024-07-15 14:05:39.549230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.847 [2024-07-15 14:05:39.549245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.847 [2024-07-15 14:05:39.549257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:44.847 [2024-07-15 14:05:39.549291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:44.847 qpair failed and we were unable to recover it. 00:26:44.847 [2024-07-15 14:05:39.559150] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.847 [2024-07-15 14:05:39.559265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.847 [2024-07-15 14:05:39.559290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.847 [2024-07-15 14:05:39.559308] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.847 [2024-07-15 14:05:39.559321] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:44.847 [2024-07-15 14:05:39.559350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:44.847 qpair failed and we were unable to recover it. 
00:26:44.847 [2024-07-15 14:05:39.569126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.847 [2024-07-15 14:05:39.569224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.847 [2024-07-15 14:05:39.569250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.847 [2024-07-15 14:05:39.569266] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.847 [2024-07-15 14:05:39.569279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:44.847 [2024-07-15 14:05:39.569308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:44.847 qpair failed and we were unable to recover it. 00:26:44.847 [2024-07-15 14:05:39.579153] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.847 [2024-07-15 14:05:39.579256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.847 [2024-07-15 14:05:39.579281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.847 [2024-07-15 14:05:39.579296] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.847 [2024-07-15 14:05:39.579309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:44.847 [2024-07-15 14:05:39.579339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:44.847 qpair failed and we were unable to recover it. 00:26:44.847 [2024-07-15 14:05:39.589166] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.847 [2024-07-15 14:05:39.589266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.847 [2024-07-15 14:05:39.589291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.847 [2024-07-15 14:05:39.589306] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.847 [2024-07-15 14:05:39.589319] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:44.847 [2024-07-15 14:05:39.589348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:44.847 qpair failed and we were unable to recover it. 
00:26:44.847 [2024-07-15 14:05:39.599214] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.847 [2024-07-15 14:05:39.599336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.848 [2024-07-15 14:05:39.599361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.848 [2024-07-15 14:05:39.599376] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.848 [2024-07-15 14:05:39.599389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:44.848 [2024-07-15 14:05:39.599419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:44.848 qpair failed and we were unable to recover it. 00:26:44.848 [2024-07-15 14:05:39.609267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.848 [2024-07-15 14:05:39.609362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.848 [2024-07-15 14:05:39.609387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.848 [2024-07-15 14:05:39.609401] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.848 [2024-07-15 14:05:39.609414] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:44.848 [2024-07-15 14:05:39.609443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:44.848 qpair failed and we were unable to recover it. 00:26:44.848 [2024-07-15 14:05:39.619265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.848 [2024-07-15 14:05:39.619366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.848 [2024-07-15 14:05:39.619391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.848 [2024-07-15 14:05:39.619406] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.848 [2024-07-15 14:05:39.619418] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:44.848 [2024-07-15 14:05:39.619448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:44.848 qpair failed and we were unable to recover it. 
00:26:44.848 [2024-07-15 14:05:39.629286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.848 [2024-07-15 14:05:39.629390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.848 [2024-07-15 14:05:39.629415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.848 [2024-07-15 14:05:39.629430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.848 [2024-07-15 14:05:39.629443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:44.848 [2024-07-15 14:05:39.629473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:44.848 qpair failed and we were unable to recover it. 00:26:44.848 [2024-07-15 14:05:39.639391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:44.848 [2024-07-15 14:05:39.639503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:44.848 [2024-07-15 14:05:39.639529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:44.848 [2024-07-15 14:05:39.639545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:44.848 [2024-07-15 14:05:39.639568] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:44.848 [2024-07-15 14:05:39.639601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:44.848 qpair failed and we were unable to recover it. 00:26:45.108 [2024-07-15 14:05:39.649356] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.108 [2024-07-15 14:05:39.649455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.108 [2024-07-15 14:05:39.649480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.108 [2024-07-15 14:05:39.649495] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.108 [2024-07-15 14:05:39.649508] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.108 [2024-07-15 14:05:39.649538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.108 qpair failed and we were unable to recover it. 
00:26:45.108 [2024-07-15 14:05:39.659333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.108 [2024-07-15 14:05:39.659477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.108 [2024-07-15 14:05:39.659501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.108 [2024-07-15 14:05:39.659517] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.108 [2024-07-15 14:05:39.659530] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.108 [2024-07-15 14:05:39.659559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.108 qpair failed and we were unable to recover it. 00:26:45.108 [2024-07-15 14:05:39.669409] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.108 [2024-07-15 14:05:39.669503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.108 [2024-07-15 14:05:39.669527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.108 [2024-07-15 14:05:39.669542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.108 [2024-07-15 14:05:39.669554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.108 [2024-07-15 14:05:39.669584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.108 qpair failed and we were unable to recover it. 00:26:45.108 [2024-07-15 14:05:39.679447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.108 [2024-07-15 14:05:39.679541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.108 [2024-07-15 14:05:39.679566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.108 [2024-07-15 14:05:39.679580] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.108 [2024-07-15 14:05:39.679593] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.108 [2024-07-15 14:05:39.679624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.108 qpair failed and we were unable to recover it. 
00:26:45.108 [2024-07-15 14:05:39.689472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.108 [2024-07-15 14:05:39.689566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.108 [2024-07-15 14:05:39.689591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.108 [2024-07-15 14:05:39.689606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.108 [2024-07-15 14:05:39.689619] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.108 [2024-07-15 14:05:39.689648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.108 qpair failed and we were unable to recover it. 00:26:45.108 [2024-07-15 14:05:39.699516] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.108 [2024-07-15 14:05:39.699640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.108 [2024-07-15 14:05:39.699664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.108 [2024-07-15 14:05:39.699680] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.108 [2024-07-15 14:05:39.699693] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.108 [2024-07-15 14:05:39.699747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.108 qpair failed and we were unable to recover it. 00:26:45.108 [2024-07-15 14:05:39.709509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.108 [2024-07-15 14:05:39.709613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.108 [2024-07-15 14:05:39.709638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.108 [2024-07-15 14:05:39.709652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.108 [2024-07-15 14:05:39.709665] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.108 [2024-07-15 14:05:39.709696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.108 qpair failed and we were unable to recover it. 
00:26:45.108 [2024-07-15 14:05:39.719553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.109 [2024-07-15 14:05:39.719647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.109 [2024-07-15 14:05:39.719671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.109 [2024-07-15 14:05:39.719686] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.109 [2024-07-15 14:05:39.719698] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.109 [2024-07-15 14:05:39.719750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.109 qpair failed and we were unable to recover it. 00:26:45.109 [2024-07-15 14:05:39.729580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.109 [2024-07-15 14:05:39.729672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.109 [2024-07-15 14:05:39.729696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.109 [2024-07-15 14:05:39.729734] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.109 [2024-07-15 14:05:39.729761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.109 [2024-07-15 14:05:39.729794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.109 qpair failed and we were unable to recover it. 00:26:45.109 [2024-07-15 14:05:39.739540] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.109 [2024-07-15 14:05:39.739643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.109 [2024-07-15 14:05:39.739667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.109 [2024-07-15 14:05:39.739681] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.109 [2024-07-15 14:05:39.739694] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.109 [2024-07-15 14:05:39.739746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.109 qpair failed and we were unable to recover it. 
00:26:45.109 [2024-07-15 14:05:39.749636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.109 [2024-07-15 14:05:39.749756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.109 [2024-07-15 14:05:39.749789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.109 [2024-07-15 14:05:39.749806] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.109 [2024-07-15 14:05:39.749820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.109 [2024-07-15 14:05:39.749851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.109 qpair failed and we were unable to recover it. 00:26:45.109 [2024-07-15 14:05:39.759673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.109 [2024-07-15 14:05:39.759784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.109 [2024-07-15 14:05:39.759810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.109 [2024-07-15 14:05:39.759825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.109 [2024-07-15 14:05:39.759838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.109 [2024-07-15 14:05:39.759869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.109 qpair failed and we were unable to recover it. 00:26:45.109 [2024-07-15 14:05:39.769698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.109 [2024-07-15 14:05:39.769820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.109 [2024-07-15 14:05:39.769846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.109 [2024-07-15 14:05:39.769861] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.109 [2024-07-15 14:05:39.769875] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.109 [2024-07-15 14:05:39.769905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.109 qpair failed and we were unable to recover it. 
00:26:45.109 [2024-07-15 14:05:39.779667] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.109 [2024-07-15 14:05:39.779829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.109 [2024-07-15 14:05:39.779854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.109 [2024-07-15 14:05:39.779869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.109 [2024-07-15 14:05:39.779882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.109 [2024-07-15 14:05:39.779912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.109 qpair failed and we were unable to recover it. 00:26:45.109 [2024-07-15 14:05:39.789777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.109 [2024-07-15 14:05:39.789882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.109 [2024-07-15 14:05:39.789907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.109 [2024-07-15 14:05:39.789923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.109 [2024-07-15 14:05:39.789936] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.109 [2024-07-15 14:05:39.789966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.109 qpair failed and we were unable to recover it. 00:26:45.109 [2024-07-15 14:05:39.799806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.109 [2024-07-15 14:05:39.799975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.109 [2024-07-15 14:05:39.800000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.109 [2024-07-15 14:05:39.800015] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.109 [2024-07-15 14:05:39.800043] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.109 [2024-07-15 14:05:39.800074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.109 qpair failed and we were unable to recover it. 
00:26:45.109 [2024-07-15 14:05:39.809788] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.109 [2024-07-15 14:05:39.809899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.109 [2024-07-15 14:05:39.809924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.109 [2024-07-15 14:05:39.809939] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.109 [2024-07-15 14:05:39.809953] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.109 [2024-07-15 14:05:39.809984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.109 qpair failed and we were unable to recover it. 00:26:45.109 [2024-07-15 14:05:39.819862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.109 [2024-07-15 14:05:39.819972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.109 [2024-07-15 14:05:39.820002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.109 [2024-07-15 14:05:39.820018] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.109 [2024-07-15 14:05:39.820031] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.109 [2024-07-15 14:05:39.820077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.109 qpair failed and we were unable to recover it. 00:26:45.109 [2024-07-15 14:05:39.829888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.109 [2024-07-15 14:05:39.829990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.109 [2024-07-15 14:05:39.830015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.109 [2024-07-15 14:05:39.830045] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.109 [2024-07-15 14:05:39.830058] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.109 [2024-07-15 14:05:39.830089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.109 qpair failed and we were unable to recover it. 
00:26:45.109 [2024-07-15 14:05:39.839888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.109 [2024-07-15 14:05:39.839987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.109 [2024-07-15 14:05:39.840013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.109 [2024-07-15 14:05:39.840042] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.109 [2024-07-15 14:05:39.840056] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.109 [2024-07-15 14:05:39.840087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.109 qpair failed and we were unable to recover it. 00:26:45.109 [2024-07-15 14:05:39.849944] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.109 [2024-07-15 14:05:39.850057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.109 [2024-07-15 14:05:39.850082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.110 [2024-07-15 14:05:39.850098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.110 [2024-07-15 14:05:39.850110] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.110 [2024-07-15 14:05:39.850140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.110 qpair failed and we were unable to recover it. 00:26:45.110 [2024-07-15 14:05:39.859966] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.110 [2024-07-15 14:05:39.860093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.110 [2024-07-15 14:05:39.860117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.110 [2024-07-15 14:05:39.860133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.110 [2024-07-15 14:05:39.860146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.110 [2024-07-15 14:05:39.860181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.110 qpair failed and we were unable to recover it. 
00:26:45.110 [2024-07-15 14:05:39.869934] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.110 [2024-07-15 14:05:39.870053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.110 [2024-07-15 14:05:39.870078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.110 [2024-07-15 14:05:39.870092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.110 [2024-07-15 14:05:39.870105] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.110 [2024-07-15 14:05:39.870135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.110 qpair failed and we were unable to recover it. 00:26:45.110 [2024-07-15 14:05:39.880037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.110 [2024-07-15 14:05:39.880163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.110 [2024-07-15 14:05:39.880189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.110 [2024-07-15 14:05:39.880205] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.110 [2024-07-15 14:05:39.880219] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.110 [2024-07-15 14:05:39.880249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.110 qpair failed and we were unable to recover it. 00:26:45.110 [2024-07-15 14:05:39.890055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.110 [2024-07-15 14:05:39.890152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.110 [2024-07-15 14:05:39.890177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.110 [2024-07-15 14:05:39.890192] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.110 [2024-07-15 14:05:39.890205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.110 [2024-07-15 14:05:39.890234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.110 qpair failed and we were unable to recover it. 
00:26:45.110 [2024-07-15 14:05:39.900044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.110 [2024-07-15 14:05:39.900161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.110 [2024-07-15 14:05:39.900187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.110 [2024-07-15 14:05:39.900202] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.110 [2024-07-15 14:05:39.900216] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.110 [2024-07-15 14:05:39.900244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.110 qpair failed and we were unable to recover it. 00:26:45.110 [2024-07-15 14:05:39.910011] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.110 [2024-07-15 14:05:39.910123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.110 [2024-07-15 14:05:39.910154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.110 [2024-07-15 14:05:39.910170] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.110 [2024-07-15 14:05:39.910184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.110 [2024-07-15 14:05:39.910213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.110 qpair failed and we were unable to recover it. 00:26:45.110 [2024-07-15 14:05:39.920079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.110 [2024-07-15 14:05:39.920191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.110 [2024-07-15 14:05:39.920215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.110 [2024-07-15 14:05:39.920231] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.110 [2024-07-15 14:05:39.920243] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.110 [2024-07-15 14:05:39.920272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.110 qpair failed and we were unable to recover it. 
00:26:45.110 [2024-07-15 14:05:39.930089] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.110 [2024-07-15 14:05:39.930189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.110 [2024-07-15 14:05:39.930215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.110 [2024-07-15 14:05:39.930230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.110 [2024-07-15 14:05:39.930243] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.110 [2024-07-15 14:05:39.930272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.110 qpair failed and we were unable to recover it. 00:26:45.110 [2024-07-15 14:05:39.940133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.110 [2024-07-15 14:05:39.940246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.110 [2024-07-15 14:05:39.940270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.110 [2024-07-15 14:05:39.940285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.110 [2024-07-15 14:05:39.940298] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.110 [2024-07-15 14:05:39.940327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.110 qpair failed and we were unable to recover it. 00:26:45.371 [2024-07-15 14:05:39.950143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.371 [2024-07-15 14:05:39.950275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.371 [2024-07-15 14:05:39.950301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.371 [2024-07-15 14:05:39.950317] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.371 [2024-07-15 14:05:39.950330] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.371 [2024-07-15 14:05:39.950364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.371 qpair failed and we were unable to recover it. 
00:26:45.371 [2024-07-15 14:05:39.960201] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.371 [2024-07-15 14:05:39.960320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.371 [2024-07-15 14:05:39.960345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.371 [2024-07-15 14:05:39.960360] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.371 [2024-07-15 14:05:39.960372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.371 [2024-07-15 14:05:39.960402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.371 qpair failed and we were unable to recover it. 00:26:45.371 [2024-07-15 14:05:39.970189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.371 [2024-07-15 14:05:39.970289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.371 [2024-07-15 14:05:39.970314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.371 [2024-07-15 14:05:39.970328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.371 [2024-07-15 14:05:39.970341] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.371 [2024-07-15 14:05:39.970370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.371 qpair failed and we were unable to recover it. 00:26:45.371 [2024-07-15 14:05:39.980244] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.371 [2024-07-15 14:05:39.980344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.371 [2024-07-15 14:05:39.980368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.371 [2024-07-15 14:05:39.980383] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.371 [2024-07-15 14:05:39.980395] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.371 [2024-07-15 14:05:39.980425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.371 qpair failed and we were unable to recover it. 
00:26:45.371 [2024-07-15 14:05:39.990247] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.371 [2024-07-15 14:05:39.990343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.371 [2024-07-15 14:05:39.990367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.371 [2024-07-15 14:05:39.990382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.371 [2024-07-15 14:05:39.990394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.371 [2024-07-15 14:05:39.990423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.371 qpair failed and we were unable to recover it. 00:26:45.371 [2024-07-15 14:05:40.000304] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.371 [2024-07-15 14:05:40.000402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.371 [2024-07-15 14:05:40.000432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.371 [2024-07-15 14:05:40.000448] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.371 [2024-07-15 14:05:40.000461] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.371 [2024-07-15 14:05:40.000495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.371 qpair failed and we were unable to recover it. 00:26:45.371 [2024-07-15 14:05:40.010317] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.371 [2024-07-15 14:05:40.010418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.371 [2024-07-15 14:05:40.010446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.371 [2024-07-15 14:05:40.010461] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.371 [2024-07-15 14:05:40.010475] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.371 [2024-07-15 14:05:40.010520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.371 qpair failed and we were unable to recover it. 
00:26:45.371 [2024-07-15 14:05:40.020367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.372 [2024-07-15 14:05:40.020497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.372 [2024-07-15 14:05:40.020524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.372 [2024-07-15 14:05:40.020540] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.372 [2024-07-15 14:05:40.020554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.372 [2024-07-15 14:05:40.020600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.372 qpair failed and we were unable to recover it. 00:26:45.372 [2024-07-15 14:05:40.030370] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.372 [2024-07-15 14:05:40.030522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.372 [2024-07-15 14:05:40.030550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.372 [2024-07-15 14:05:40.030566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.372 [2024-07-15 14:05:40.030579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.372 [2024-07-15 14:05:40.030610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.372 qpair failed and we were unable to recover it. 00:26:45.372 [2024-07-15 14:05:40.040491] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.372 [2024-07-15 14:05:40.040597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.372 [2024-07-15 14:05:40.040626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.372 [2024-07-15 14:05:40.040641] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.372 [2024-07-15 14:05:40.040660] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.372 [2024-07-15 14:05:40.040692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.372 qpair failed and we were unable to recover it. 
00:26:45.372 [2024-07-15 14:05:40.050492] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.372 [2024-07-15 14:05:40.050606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.372 [2024-07-15 14:05:40.050632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.372 [2024-07-15 14:05:40.050647] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.372 [2024-07-15 14:05:40.050659] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.372 [2024-07-15 14:05:40.050689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.372 qpair failed and we were unable to recover it. 00:26:45.372 [2024-07-15 14:05:40.060457] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.372 [2024-07-15 14:05:40.060570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.372 [2024-07-15 14:05:40.060596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.372 [2024-07-15 14:05:40.060611] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.372 [2024-07-15 14:05:40.060623] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.372 [2024-07-15 14:05:40.060653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.372 qpair failed and we were unable to recover it. 00:26:45.372 [2024-07-15 14:05:40.070477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.372 [2024-07-15 14:05:40.070578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.372 [2024-07-15 14:05:40.070604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.372 [2024-07-15 14:05:40.070618] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.372 [2024-07-15 14:05:40.070631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.372 [2024-07-15 14:05:40.070660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.372 qpair failed and we were unable to recover it. 
00:26:45.372 [2024-07-15 14:05:40.080513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.372 [2024-07-15 14:05:40.080618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.372 [2024-07-15 14:05:40.080642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.372 [2024-07-15 14:05:40.080657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.372 [2024-07-15 14:05:40.080669] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.372 [2024-07-15 14:05:40.080698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.372 qpair failed and we were unable to recover it. 00:26:45.372 [2024-07-15 14:05:40.090557] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.372 [2024-07-15 14:05:40.090660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.372 [2024-07-15 14:05:40.090686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.372 [2024-07-15 14:05:40.090701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.372 [2024-07-15 14:05:40.090713] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.372 [2024-07-15 14:05:40.090778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.372 qpair failed and we were unable to recover it. 00:26:45.372 [2024-07-15 14:05:40.100606] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.372 [2024-07-15 14:05:40.100745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.372 [2024-07-15 14:05:40.100772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.372 [2024-07-15 14:05:40.100787] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.372 [2024-07-15 14:05:40.100800] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.372 [2024-07-15 14:05:40.100831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.372 qpair failed and we were unable to recover it. 
00:26:45.372 [2024-07-15 14:05:40.110588] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.372 [2024-07-15 14:05:40.110685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.372 [2024-07-15 14:05:40.110710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.372 [2024-07-15 14:05:40.110746] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.372 [2024-07-15 14:05:40.110761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.372 [2024-07-15 14:05:40.110792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.372 qpair failed and we were unable to recover it. 00:26:45.372 [2024-07-15 14:05:40.120626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.373 [2024-07-15 14:05:40.120748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.373 [2024-07-15 14:05:40.120773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.373 [2024-07-15 14:05:40.120788] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.373 [2024-07-15 14:05:40.120801] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.373 [2024-07-15 14:05:40.120839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.373 qpair failed and we were unable to recover it. 00:26:45.373 [2024-07-15 14:05:40.130679] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.373 [2024-07-15 14:05:40.130802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.373 [2024-07-15 14:05:40.130827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.373 [2024-07-15 14:05:40.130847] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.373 [2024-07-15 14:05:40.130861] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.373 [2024-07-15 14:05:40.130891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.373 qpair failed and we were unable to recover it. 
00:26:45.373 [2024-07-15 14:05:40.140709] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.373 [2024-07-15 14:05:40.140837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.373 [2024-07-15 14:05:40.140862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.373 [2024-07-15 14:05:40.140877] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.373 [2024-07-15 14:05:40.140901] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.373 [2024-07-15 14:05:40.140931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.373 qpair failed and we were unable to recover it. 00:26:45.373 [2024-07-15 14:05:40.150754] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.373 [2024-07-15 14:05:40.150856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.373 [2024-07-15 14:05:40.150881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.373 [2024-07-15 14:05:40.150897] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.373 [2024-07-15 14:05:40.150910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.373 [2024-07-15 14:05:40.150940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.373 qpair failed and we were unable to recover it. 00:26:45.373 [2024-07-15 14:05:40.160777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.373 [2024-07-15 14:05:40.160911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.373 [2024-07-15 14:05:40.160936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.373 [2024-07-15 14:05:40.160954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.373 [2024-07-15 14:05:40.160968] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.373 [2024-07-15 14:05:40.160998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.373 qpair failed and we were unable to recover it. 
00:26:45.373 [2024-07-15 14:05:40.170801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.373 [2024-07-15 14:05:40.170904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.373 [2024-07-15 14:05:40.170928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.373 [2024-07-15 14:05:40.170952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.373 [2024-07-15 14:05:40.170965] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.373 [2024-07-15 14:05:40.170996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.373 qpair failed and we were unable to recover it. 00:26:45.373 [2024-07-15 14:05:40.180916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.373 [2024-07-15 14:05:40.181051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.373 [2024-07-15 14:05:40.181075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.373 [2024-07-15 14:05:40.181089] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.373 [2024-07-15 14:05:40.181102] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.373 [2024-07-15 14:05:40.181131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.373 qpair failed and we were unable to recover it. 00:26:45.373 [2024-07-15 14:05:40.190862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.373 [2024-07-15 14:05:40.190966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.373 [2024-07-15 14:05:40.190990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.373 [2024-07-15 14:05:40.191005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.373 [2024-07-15 14:05:40.191018] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.373 [2024-07-15 14:05:40.191063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.373 qpair failed and we were unable to recover it. 
00:26:45.373 [2024-07-15 14:05:40.200960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.373 [2024-07-15 14:05:40.201075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.373 [2024-07-15 14:05:40.201098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.373 [2024-07-15 14:05:40.201113] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.373 [2024-07-15 14:05:40.201125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.373 [2024-07-15 14:05:40.201163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.373 qpair failed and we were unable to recover it. 00:26:45.632 [2024-07-15 14:05:40.210944] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.632 [2024-07-15 14:05:40.211042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.632 [2024-07-15 14:05:40.211067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.632 [2024-07-15 14:05:40.211083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.632 [2024-07-15 14:05:40.211096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.632 [2024-07-15 14:05:40.211142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.632 qpair failed and we were unable to recover it. 00:26:45.632 [2024-07-15 14:05:40.220978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.632 [2024-07-15 14:05:40.221138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.632 [2024-07-15 14:05:40.221163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.632 [2024-07-15 14:05:40.221183] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.632 [2024-07-15 14:05:40.221196] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.633 [2024-07-15 14:05:40.221226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.633 qpair failed and we were unable to recover it. 
00:26:45.633 [2024-07-15 14:05:40.230988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.633 [2024-07-15 14:05:40.231101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.633 [2024-07-15 14:05:40.231124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.633 [2024-07-15 14:05:40.231139] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.633 [2024-07-15 14:05:40.231152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.633 [2024-07-15 14:05:40.231180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.633 qpair failed and we were unable to recover it. 00:26:45.633 [2024-07-15 14:05:40.241093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.633 [2024-07-15 14:05:40.241210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.633 [2024-07-15 14:05:40.241234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.633 [2024-07-15 14:05:40.241249] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.633 [2024-07-15 14:05:40.241262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.633 [2024-07-15 14:05:40.241292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.633 qpair failed and we were unable to recover it. 00:26:45.633 [2024-07-15 14:05:40.251001] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.633 [2024-07-15 14:05:40.251113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.633 [2024-07-15 14:05:40.251137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.633 [2024-07-15 14:05:40.251151] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.633 [2024-07-15 14:05:40.251163] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.633 [2024-07-15 14:05:40.251192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.633 qpair failed and we were unable to recover it. 
00:26:45.633 [2024-07-15 14:05:40.261141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.633 [2024-07-15 14:05:40.261245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.633 [2024-07-15 14:05:40.261269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.633 [2024-07-15 14:05:40.261284] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.633 [2024-07-15 14:05:40.261297] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.633 [2024-07-15 14:05:40.261331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.633 qpair failed and we were unable to recover it. 00:26:45.633 [2024-07-15 14:05:40.271116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.633 [2024-07-15 14:05:40.271215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.633 [2024-07-15 14:05:40.271240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.633 [2024-07-15 14:05:40.271255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.633 [2024-07-15 14:05:40.271267] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:45.633 [2024-07-15 14:05:40.271296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.633 qpair failed and we were unable to recover it. 00:26:45.633 [2024-07-15 14:05:40.281178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.633 [2024-07-15 14:05:40.281277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.633 [2024-07-15 14:05:40.281307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.633 [2024-07-15 14:05:40.281322] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.633 [2024-07-15 14:05:40.281336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:45.633 [2024-07-15 14:05:40.281367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.633 qpair failed and we were unable to recover it. 
00:26:45.633 [2024-07-15 14:05:40.291151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.633 [2024-07-15 14:05:40.291249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.633 [2024-07-15 14:05:40.291274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.633 [2024-07-15 14:05:40.291304] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.633 [2024-07-15 14:05:40.291318] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:45.633 [2024-07-15 14:05:40.291349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.633 qpair failed and we were unable to recover it. 00:26:45.633 [2024-07-15 14:05:40.301305] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.633 [2024-07-15 14:05:40.301444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.633 [2024-07-15 14:05:40.301471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.633 [2024-07-15 14:05:40.301487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.633 [2024-07-15 14:05:40.301500] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:45.633 [2024-07-15 14:05:40.301529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.633 qpair failed and we were unable to recover it. 00:26:45.633 [2024-07-15 14:05:40.311249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.633 [2024-07-15 14:05:40.311346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.633 [2024-07-15 14:05:40.311376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.633 [2024-07-15 14:05:40.311392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.633 [2024-07-15 14:05:40.311405] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:45.633 [2024-07-15 14:05:40.311432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.633 qpair failed and we were unable to recover it. 
00:26:45.633 [2024-07-15 14:05:40.321298] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.633 [2024-07-15 14:05:40.321390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.633 [2024-07-15 14:05:40.321414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.633 [2024-07-15 14:05:40.321429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.633 [2024-07-15 14:05:40.321441] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:45.633 [2024-07-15 14:05:40.321469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.633 qpair failed and we were unable to recover it. 00:26:45.633 [2024-07-15 14:05:40.331315] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.633 [2024-07-15 14:05:40.331410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.633 [2024-07-15 14:05:40.331435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.633 [2024-07-15 14:05:40.331449] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.633 [2024-07-15 14:05:40.331461] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:45.633 [2024-07-15 14:05:40.331489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.633 qpair failed and we were unable to recover it. 00:26:45.633 [2024-07-15 14:05:40.341314] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.633 [2024-07-15 14:05:40.341417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.633 [2024-07-15 14:05:40.341441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.633 [2024-07-15 14:05:40.341455] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.633 [2024-07-15 14:05:40.341468] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:45.633 [2024-07-15 14:05:40.341496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.633 qpair failed and we were unable to recover it. 
00:26:45.633 [2024-07-15 14:05:40.351320] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.633 [2024-07-15 14:05:40.351421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.633 [2024-07-15 14:05:40.351445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.633 [2024-07-15 14:05:40.351460] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.633 [2024-07-15 14:05:40.351472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:45.633 [2024-07-15 14:05:40.351511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.633 qpair failed and we were unable to recover it. 00:26:45.633 [2024-07-15 14:05:40.361347] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.633 [2024-07-15 14:05:40.361441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.633 [2024-07-15 14:05:40.361465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.633 [2024-07-15 14:05:40.361480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.633 [2024-07-15 14:05:40.361492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:45.633 [2024-07-15 14:05:40.361519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.633 qpair failed and we were unable to recover it. 00:26:45.633 [2024-07-15 14:05:40.371382] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.633 [2024-07-15 14:05:40.371519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.633 [2024-07-15 14:05:40.371543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.633 [2024-07-15 14:05:40.371558] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.633 [2024-07-15 14:05:40.371571] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:45.633 [2024-07-15 14:05:40.371597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.633 qpair failed and we were unable to recover it. 
00:26:45.633 [2024-07-15 14:05:40.381446] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.633 [2024-07-15 14:05:40.381598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.633 [2024-07-15 14:05:40.381622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.633 [2024-07-15 14:05:40.381636] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.633 [2024-07-15 14:05:40.381656] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:45.633 [2024-07-15 14:05:40.381685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.633 qpair failed and we were unable to recover it. 00:26:45.633 [2024-07-15 14:05:40.391477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.633 [2024-07-15 14:05:40.391578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.633 [2024-07-15 14:05:40.391601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.633 [2024-07-15 14:05:40.391616] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.633 [2024-07-15 14:05:40.391628] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:45.633 [2024-07-15 14:05:40.391657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.633 qpair failed and we were unable to recover it. 00:26:45.633 [2024-07-15 14:05:40.401454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.633 [2024-07-15 14:05:40.401548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.633 [2024-07-15 14:05:40.401576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.633 [2024-07-15 14:05:40.401592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.633 [2024-07-15 14:05:40.401605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:45.633 [2024-07-15 14:05:40.401633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.633 qpair failed and we were unable to recover it. 
00:26:45.633 [2024-07-15 14:05:40.411452] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.633 [2024-07-15 14:05:40.411564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.633 [2024-07-15 14:05:40.411588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.633 [2024-07-15 14:05:40.411603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.633 [2024-07-15 14:05:40.411616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:45.633 [2024-07-15 14:05:40.411644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.633 qpair failed and we were unable to recover it. 00:26:45.633 [2024-07-15 14:05:40.421530] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.633 [2024-07-15 14:05:40.421667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.633 [2024-07-15 14:05:40.421690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.633 [2024-07-15 14:05:40.421705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.633 [2024-07-15 14:05:40.421717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:45.633 [2024-07-15 14:05:40.421876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.633 qpair failed and we were unable to recover it. 00:26:45.633 [2024-07-15 14:05:40.431577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.633 [2024-07-15 14:05:40.431700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.633 [2024-07-15 14:05:40.431724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.633 [2024-07-15 14:05:40.431763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.633 [2024-07-15 14:05:40.431779] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:45.633 [2024-07-15 14:05:40.431809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.633 qpair failed and we were unable to recover it. 
00:26:45.633 [2024-07-15 14:05:40.441567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.633 [2024-07-15 14:05:40.441669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.633 [2024-07-15 14:05:40.441692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.633 [2024-07-15 14:05:40.441707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.633 [2024-07-15 14:05:40.441720] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:45.633 [2024-07-15 14:05:40.441777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.633 qpair failed and we were unable to recover it. 00:26:45.633 [2024-07-15 14:05:40.451537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.633 [2024-07-15 14:05:40.451631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.633 [2024-07-15 14:05:40.451655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.633 [2024-07-15 14:05:40.451670] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.633 [2024-07-15 14:05:40.451682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:45.633 [2024-07-15 14:05:40.451710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.633 qpair failed and we were unable to recover it. 00:26:45.633 [2024-07-15 14:05:40.461681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.633 [2024-07-15 14:05:40.461843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.633 [2024-07-15 14:05:40.461868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.633 [2024-07-15 14:05:40.461883] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.633 [2024-07-15 14:05:40.461895] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:45.633 [2024-07-15 14:05:40.461924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.633 qpair failed and we were unable to recover it. 
00:26:45.633 [2024-07-15 14:05:40.471668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.633 [2024-07-15 14:05:40.471819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.633 [2024-07-15 14:05:40.471844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.633 [2024-07-15 14:05:40.471859] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.633 [2024-07-15 14:05:40.471872] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:45.633 [2024-07-15 14:05:40.471901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.633 qpair failed and we were unable to recover it. 00:26:45.891 [2024-07-15 14:05:40.481639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.891 [2024-07-15 14:05:40.481778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.891 [2024-07-15 14:05:40.481805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.891 [2024-07-15 14:05:40.481820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.891 [2024-07-15 14:05:40.481834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:45.891 [2024-07-15 14:05:40.481863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.891 qpair failed and we were unable to recover it. 00:26:45.891 [2024-07-15 14:05:40.491658] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.891 [2024-07-15 14:05:40.491779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.891 [2024-07-15 14:05:40.491810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.891 [2024-07-15 14:05:40.491827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.891 [2024-07-15 14:05:40.491839] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:45.891 [2024-07-15 14:05:40.491869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.891 qpair failed and we were unable to recover it. 
00:26:45.891 [2024-07-15 14:05:40.501705] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.891 [2024-07-15 14:05:40.501832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.891 [2024-07-15 14:05:40.501859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.891 [2024-07-15 14:05:40.501874] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.891 [2024-07-15 14:05:40.501887] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:45.891 [2024-07-15 14:05:40.501915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.891 qpair failed and we were unable to recover it. 00:26:45.891 [2024-07-15 14:05:40.511716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.891 [2024-07-15 14:05:40.511836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.891 [2024-07-15 14:05:40.511861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.891 [2024-07-15 14:05:40.511877] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.891 [2024-07-15 14:05:40.511889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:45.891 [2024-07-15 14:05:40.511918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.892 qpair failed and we were unable to recover it. 00:26:45.892 [2024-07-15 14:05:40.521773] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.892 [2024-07-15 14:05:40.521889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.892 [2024-07-15 14:05:40.521916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.892 [2024-07-15 14:05:40.521931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.892 [2024-07-15 14:05:40.521943] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:45.892 [2024-07-15 14:05:40.521973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.892 qpair failed and we were unable to recover it. 
00:26:45.892 [2024-07-15 14:05:40.531797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.892 [2024-07-15 14:05:40.531904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.892 [2024-07-15 14:05:40.531930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.892 [2024-07-15 14:05:40.531945] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.892 [2024-07-15 14:05:40.531963] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:45.892 [2024-07-15 14:05:40.531992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.892 qpair failed and we were unable to recover it. 00:26:45.892 [2024-07-15 14:05:40.541828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.892 [2024-07-15 14:05:40.541932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.892 [2024-07-15 14:05:40.541956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.892 [2024-07-15 14:05:40.541971] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.892 [2024-07-15 14:05:40.541984] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:45.892 [2024-07-15 14:05:40.542013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.892 qpair failed and we were unable to recover it. 00:26:45.892 [2024-07-15 14:05:40.551845] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.892 [2024-07-15 14:05:40.551947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.892 [2024-07-15 14:05:40.551974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.892 [2024-07-15 14:05:40.551989] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.892 [2024-07-15 14:05:40.552003] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:45.892 [2024-07-15 14:05:40.552047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.892 qpair failed and we were unable to recover it. 
00:26:45.892 [2024-07-15 14:05:40.561899] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.892 [2024-07-15 14:05:40.562011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.892 [2024-07-15 14:05:40.562037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.892 [2024-07-15 14:05:40.562067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.892 [2024-07-15 14:05:40.562080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:45.892 [2024-07-15 14:05:40.562109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.892 qpair failed and we were unable to recover it. 00:26:45.892 [2024-07-15 14:05:40.571923] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.892 [2024-07-15 14:05:40.572020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.892 [2024-07-15 14:05:40.572061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.892 [2024-07-15 14:05:40.572076] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.892 [2024-07-15 14:05:40.572089] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:45.892 [2024-07-15 14:05:40.572117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.892 qpair failed and we were unable to recover it. 00:26:45.892 [2024-07-15 14:05:40.581980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.892 [2024-07-15 14:05:40.582100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.892 [2024-07-15 14:05:40.582125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.892 [2024-07-15 14:05:40.582140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.892 [2024-07-15 14:05:40.582153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:45.892 [2024-07-15 14:05:40.582181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.892 qpair failed and we were unable to recover it. 
00:26:45.892 [2024-07-15 14:05:40.591974] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.892 [2024-07-15 14:05:40.592087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.892 [2024-07-15 14:05:40.592114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.892 [2024-07-15 14:05:40.592128] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.892 [2024-07-15 14:05:40.592140] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:45.892 [2024-07-15 14:05:40.592170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.892 qpair failed and we were unable to recover it. 00:26:45.892 [2024-07-15 14:05:40.602029] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.892 [2024-07-15 14:05:40.602148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.892 [2024-07-15 14:05:40.602174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.892 [2024-07-15 14:05:40.602189] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.892 [2024-07-15 14:05:40.602201] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:45.892 [2024-07-15 14:05:40.602229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.892 qpair failed and we were unable to recover it. 00:26:45.892 [2024-07-15 14:05:40.612041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.892 [2024-07-15 14:05:40.612150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.892 [2024-07-15 14:05:40.612175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.892 [2024-07-15 14:05:40.612190] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.892 [2024-07-15 14:05:40.612203] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:45.892 [2024-07-15 14:05:40.612231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.892 qpair failed and we were unable to recover it. 
00:26:45.892 [2024-07-15 14:05:40.622083] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.892 [2024-07-15 14:05:40.622185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.892 [2024-07-15 14:05:40.622209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.892 [2024-07-15 14:05:40.622224] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.892 [2024-07-15 14:05:40.622241] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:45.892 [2024-07-15 14:05:40.622270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.892 qpair failed and we were unable to recover it. 00:26:45.892 [2024-07-15 14:05:40.632088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.892 [2024-07-15 14:05:40.632212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.892 [2024-07-15 14:05:40.632238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.892 [2024-07-15 14:05:40.632254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.892 [2024-07-15 14:05:40.632266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:45.892 [2024-07-15 14:05:40.632294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.892 qpair failed and we were unable to recover it. 00:26:45.892 [2024-07-15 14:05:40.642149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.892 [2024-07-15 14:05:40.642252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.892 [2024-07-15 14:05:40.642277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.892 [2024-07-15 14:05:40.642292] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.892 [2024-07-15 14:05:40.642304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:45.892 [2024-07-15 14:05:40.642333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.892 qpair failed and we were unable to recover it. 
00:26:45.892 [2024-07-15 14:05:40.652141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.892 [2024-07-15 14:05:40.652236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.892 [2024-07-15 14:05:40.652259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.892 [2024-07-15 14:05:40.652274] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.892 [2024-07-15 14:05:40.652286] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:45.892 [2024-07-15 14:05:40.652313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.892 qpair failed and we were unable to recover it. 00:26:45.892 [2024-07-15 14:05:40.662181] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.892 [2024-07-15 14:05:40.662280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.893 [2024-07-15 14:05:40.662303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.893 [2024-07-15 14:05:40.662318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.893 [2024-07-15 14:05:40.662330] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:45.893 [2024-07-15 14:05:40.662358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.893 qpair failed and we were unable to recover it. 00:26:45.893 [2024-07-15 14:05:40.672181] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.893 [2024-07-15 14:05:40.672284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.893 [2024-07-15 14:05:40.672308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.893 [2024-07-15 14:05:40.672323] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.893 [2024-07-15 14:05:40.672335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:45.893 [2024-07-15 14:05:40.672363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.893 qpair failed and we were unable to recover it. 
00:26:45.893 [2024-07-15 14:05:40.682246] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.893 [2024-07-15 14:05:40.682345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.893 [2024-07-15 14:05:40.682368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.893 [2024-07-15 14:05:40.682383] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.893 [2024-07-15 14:05:40.682396] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:45.893 [2024-07-15 14:05:40.682423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.893 qpair failed and we were unable to recover it. 00:26:45.893 [2024-07-15 14:05:40.692297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.893 [2024-07-15 14:05:40.692394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.893 [2024-07-15 14:05:40.692418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.893 [2024-07-15 14:05:40.692434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.893 [2024-07-15 14:05:40.692446] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:45.893 [2024-07-15 14:05:40.692474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.893 qpair failed and we were unable to recover it. 00:26:45.893 [2024-07-15 14:05:40.702315] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.893 [2024-07-15 14:05:40.702439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.893 [2024-07-15 14:05:40.702464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.893 [2024-07-15 14:05:40.702479] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.893 [2024-07-15 14:05:40.702492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:45.893 [2024-07-15 14:05:40.702519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.893 qpair failed and we were unable to recover it. 
00:26:45.893 [2024-07-15 14:05:40.712319] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.893 [2024-07-15 14:05:40.712429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.893 [2024-07-15 14:05:40.712454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.893 [2024-07-15 14:05:40.712469] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.893 [2024-07-15 14:05:40.712487] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:45.893 [2024-07-15 14:05:40.712516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.893 qpair failed and we were unable to recover it. 00:26:45.893 [2024-07-15 14:05:40.722339] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:45.893 [2024-07-15 14:05:40.722444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:45.893 [2024-07-15 14:05:40.722470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:45.893 [2024-07-15 14:05:40.722485] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:45.893 [2024-07-15 14:05:40.722498] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:45.893 [2024-07-15 14:05:40.722527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.893 qpair failed and we were unable to recover it. 00:26:46.152 [2024-07-15 14:05:40.732409] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.152 [2024-07-15 14:05:40.732538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.152 [2024-07-15 14:05:40.732578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.152 [2024-07-15 14:05:40.732594] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.152 [2024-07-15 14:05:40.732606] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.152 [2024-07-15 14:05:40.732634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.152 qpair failed and we were unable to recover it. 
00:26:46.152 [2024-07-15 14:05:40.742466] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.152 [2024-07-15 14:05:40.742594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.152 [2024-07-15 14:05:40.742619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.152 [2024-07-15 14:05:40.742634] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.152 [2024-07-15 14:05:40.742647] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.152 [2024-07-15 14:05:40.742675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.152 qpair failed and we were unable to recover it. 00:26:46.152 [2024-07-15 14:05:40.752494] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.152 [2024-07-15 14:05:40.752593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.152 [2024-07-15 14:05:40.752617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.152 [2024-07-15 14:05:40.752632] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.152 [2024-07-15 14:05:40.752644] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.152 [2024-07-15 14:05:40.752672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.152 qpair failed and we were unable to recover it. 00:26:46.152 [2024-07-15 14:05:40.762451] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.152 [2024-07-15 14:05:40.762542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.152 [2024-07-15 14:05:40.762566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.152 [2024-07-15 14:05:40.762581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.152 [2024-07-15 14:05:40.762593] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.152 [2024-07-15 14:05:40.762621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.152 qpair failed and we were unable to recover it. 
00:26:46.152 [2024-07-15 14:05:40.772491] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.152 [2024-07-15 14:05:40.772586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.152 [2024-07-15 14:05:40.772610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.152 [2024-07-15 14:05:40.772624] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.152 [2024-07-15 14:05:40.772637] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.152 [2024-07-15 14:05:40.772664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.152 qpair failed and we were unable to recover it. 00:26:46.152 [2024-07-15 14:05:40.782589] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.152 [2024-07-15 14:05:40.782698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.152 [2024-07-15 14:05:40.782744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.152 [2024-07-15 14:05:40.782761] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.152 [2024-07-15 14:05:40.782775] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.152 [2024-07-15 14:05:40.782804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.152 qpair failed and we were unable to recover it. 00:26:46.152 [2024-07-15 14:05:40.792583] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.152 [2024-07-15 14:05:40.792707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.152 [2024-07-15 14:05:40.792744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.152 [2024-07-15 14:05:40.792772] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.152 [2024-07-15 14:05:40.792794] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.152 [2024-07-15 14:05:40.792836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.152 qpair failed and we were unable to recover it. 
00:26:46.152 [2024-07-15 14:05:40.802559] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.152 [2024-07-15 14:05:40.802654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.152 [2024-07-15 14:05:40.802681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.152 [2024-07-15 14:05:40.802701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.152 [2024-07-15 14:05:40.802715] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.152 [2024-07-15 14:05:40.802771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.152 qpair failed and we were unable to recover it. 00:26:46.152 [2024-07-15 14:05:40.812601] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.152 [2024-07-15 14:05:40.812745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.152 [2024-07-15 14:05:40.812772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.152 [2024-07-15 14:05:40.812788] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.152 [2024-07-15 14:05:40.812802] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.152 [2024-07-15 14:05:40.812831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.152 qpair failed and we were unable to recover it. 00:26:46.152 [2024-07-15 14:05:40.822666] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.152 [2024-07-15 14:05:40.822785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.152 [2024-07-15 14:05:40.822810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.152 [2024-07-15 14:05:40.822825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.152 [2024-07-15 14:05:40.822837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.152 [2024-07-15 14:05:40.822867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.152 qpair failed and we were unable to recover it. 
00:26:46.152 [2024-07-15 14:05:40.832645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.152 [2024-07-15 14:05:40.832743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.152 [2024-07-15 14:05:40.832785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.152 [2024-07-15 14:05:40.832801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.152 [2024-07-15 14:05:40.832814] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.152 [2024-07-15 14:05:40.832843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.152 qpair failed and we were unable to recover it. 00:26:46.152 [2024-07-15 14:05:40.842682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.152 [2024-07-15 14:05:40.842800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.152 [2024-07-15 14:05:40.842827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.152 [2024-07-15 14:05:40.842842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.152 [2024-07-15 14:05:40.842855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.152 [2024-07-15 14:05:40.842884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.152 qpair failed and we were unable to recover it. 00:26:46.152 [2024-07-15 14:05:40.852688] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.152 [2024-07-15 14:05:40.852803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.152 [2024-07-15 14:05:40.852830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.153 [2024-07-15 14:05:40.852846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.153 [2024-07-15 14:05:40.852858] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.153 [2024-07-15 14:05:40.852887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.153 qpair failed and we were unable to recover it. 
00:26:46.153 [2024-07-15 14:05:40.862764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.153 [2024-07-15 14:05:40.862876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.153 [2024-07-15 14:05:40.862902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.153 [2024-07-15 14:05:40.862918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.153 [2024-07-15 14:05:40.862931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.153 [2024-07-15 14:05:40.862960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.153 qpair failed and we were unable to recover it. 00:26:46.153 [2024-07-15 14:05:40.872762] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.153 [2024-07-15 14:05:40.872874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.153 [2024-07-15 14:05:40.872900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.153 [2024-07-15 14:05:40.872915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.153 [2024-07-15 14:05:40.872927] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.153 [2024-07-15 14:05:40.872955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.153 qpair failed and we were unable to recover it. 00:26:46.153 [2024-07-15 14:05:40.882812] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.153 [2024-07-15 14:05:40.882916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.153 [2024-07-15 14:05:40.882942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.153 [2024-07-15 14:05:40.882958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.153 [2024-07-15 14:05:40.882971] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.153 [2024-07-15 14:05:40.882999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.153 qpair failed and we were unable to recover it. 
00:26:46.153 [2024-07-15 14:05:40.892887] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.153 [2024-07-15 14:05:40.892987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.153 [2024-07-15 14:05:40.893012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.153 [2024-07-15 14:05:40.893031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.153 [2024-07-15 14:05:40.893060] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.153 [2024-07-15 14:05:40.893089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.153 qpair failed and we were unable to recover it. 00:26:46.153 [2024-07-15 14:05:40.902888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.153 [2024-07-15 14:05:40.902989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.153 [2024-07-15 14:05:40.903013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.153 [2024-07-15 14:05:40.903029] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.153 [2024-07-15 14:05:40.903056] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.153 [2024-07-15 14:05:40.903085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.153 qpair failed and we were unable to recover it. 00:26:46.153 [2024-07-15 14:05:40.912899] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.153 [2024-07-15 14:05:40.913105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.153 [2024-07-15 14:05:40.913130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.153 [2024-07-15 14:05:40.913145] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.153 [2024-07-15 14:05:40.913158] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.153 [2024-07-15 14:05:40.913185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.153 qpair failed and we were unable to recover it. 
00:26:46.153 [2024-07-15 14:05:40.922915] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.153 [2024-07-15 14:05:40.923014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.153 [2024-07-15 14:05:40.923041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.153 [2024-07-15 14:05:40.923071] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.153 [2024-07-15 14:05:40.923084] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.153 [2024-07-15 14:05:40.923112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.153 qpair failed and we were unable to recover it. 00:26:46.153 [2024-07-15 14:05:40.932970] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.153 [2024-07-15 14:05:40.933079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.153 [2024-07-15 14:05:40.933105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.153 [2024-07-15 14:05:40.933120] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.153 [2024-07-15 14:05:40.933133] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.153 [2024-07-15 14:05:40.933160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.153 qpair failed and we were unable to recover it. 00:26:46.153 [2024-07-15 14:05:40.942991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.153 [2024-07-15 14:05:40.943116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.153 [2024-07-15 14:05:40.943141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.153 [2024-07-15 14:05:40.943156] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.153 [2024-07-15 14:05:40.943168] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.153 [2024-07-15 14:05:40.943196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.153 qpair failed and we were unable to recover it. 
00:26:46.153 [2024-07-15 14:05:40.953029] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.153 [2024-07-15 14:05:40.953145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.153 [2024-07-15 14:05:40.953171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.153 [2024-07-15 14:05:40.953186] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.153 [2024-07-15 14:05:40.953198] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.153 [2024-07-15 14:05:40.953226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.153 qpair failed and we were unable to recover it. 00:26:46.153 [2024-07-15 14:05:40.963061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.153 [2024-07-15 14:05:40.963162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.153 [2024-07-15 14:05:40.963187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.153 [2024-07-15 14:05:40.963202] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.153 [2024-07-15 14:05:40.963214] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.153 [2024-07-15 14:05:40.963242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.153 qpair failed and we were unable to recover it. 00:26:46.153 [2024-07-15 14:05:40.973069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.153 [2024-07-15 14:05:40.973199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.153 [2024-07-15 14:05:40.973223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.153 [2024-07-15 14:05:40.973238] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.153 [2024-07-15 14:05:40.973251] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.153 [2024-07-15 14:05:40.973278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.153 qpair failed and we were unable to recover it. 
00:26:46.153 [2024-07-15 14:05:40.983079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.153 [2024-07-15 14:05:40.983173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.153 [2024-07-15 14:05:40.983201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.153 [2024-07-15 14:05:40.983216] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.153 [2024-07-15 14:05:40.983229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.153 [2024-07-15 14:05:40.983257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.153 qpair failed and we were unable to recover it. 00:26:46.413 [2024-07-15 14:05:40.993133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.413 [2024-07-15 14:05:40.993231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.413 [2024-07-15 14:05:40.993257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.413 [2024-07-15 14:05:40.993272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.413 [2024-07-15 14:05:40.993284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.413 [2024-07-15 14:05:40.993313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.413 qpair failed and we were unable to recover it. 00:26:46.413 [2024-07-15 14:05:41.003194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.413 [2024-07-15 14:05:41.003288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.413 [2024-07-15 14:05:41.003312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.413 [2024-07-15 14:05:41.003327] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.413 [2024-07-15 14:05:41.003339] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.413 [2024-07-15 14:05:41.003367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.413 qpair failed and we were unable to recover it. 
00:26:46.413 [2024-07-15 14:05:41.013182] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.413 [2024-07-15 14:05:41.013287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.413 [2024-07-15 14:05:41.013310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.413 [2024-07-15 14:05:41.013325] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.413 [2024-07-15 14:05:41.013338] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.413 [2024-07-15 14:05:41.013366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.413 qpair failed and we were unable to recover it. 00:26:46.413 [2024-07-15 14:05:41.023260] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.413 [2024-07-15 14:05:41.023377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.413 [2024-07-15 14:05:41.023402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.413 [2024-07-15 14:05:41.023417] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.413 [2024-07-15 14:05:41.023429] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.413 [2024-07-15 14:05:41.023457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.413 qpair failed and we were unable to recover it. 00:26:46.413 [2024-07-15 14:05:41.033249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.413 [2024-07-15 14:05:41.033354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.413 [2024-07-15 14:05:41.033379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.413 [2024-07-15 14:05:41.033394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.413 [2024-07-15 14:05:41.033407] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.413 [2024-07-15 14:05:41.033434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.413 qpair failed and we were unable to recover it. 
00:26:46.413 [2024-07-15 14:05:41.043263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.413 [2024-07-15 14:05:41.043374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.413 [2024-07-15 14:05:41.043399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.413 [2024-07-15 14:05:41.043414] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.413 [2024-07-15 14:05:41.043427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.413 [2024-07-15 14:05:41.043455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.413 qpair failed and we were unable to recover it. 00:26:46.413 [2024-07-15 14:05:41.053325] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.413 [2024-07-15 14:05:41.053424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.413 [2024-07-15 14:05:41.053450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.413 [2024-07-15 14:05:41.053466] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.413 [2024-07-15 14:05:41.053479] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.413 [2024-07-15 14:05:41.053508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.413 qpair failed and we were unable to recover it. 00:26:46.413 [2024-07-15 14:05:41.063340] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.413 [2024-07-15 14:05:41.063439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.413 [2024-07-15 14:05:41.063463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.413 [2024-07-15 14:05:41.063478] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.413 [2024-07-15 14:05:41.063490] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.413 [2024-07-15 14:05:41.063518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.413 qpair failed and we were unable to recover it. 
00:26:46.413 [2024-07-15 14:05:41.073340] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.413 [2024-07-15 14:05:41.073479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.413 [2024-07-15 14:05:41.073510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.413 [2024-07-15 14:05:41.073526] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.413 [2024-07-15 14:05:41.073539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.413 [2024-07-15 14:05:41.073567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.413 qpair failed and we were unable to recover it. 00:26:46.413 [2024-07-15 14:05:41.083358] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.413 [2024-07-15 14:05:41.083469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.413 [2024-07-15 14:05:41.083494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.413 [2024-07-15 14:05:41.083509] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.413 [2024-07-15 14:05:41.083521] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.413 [2024-07-15 14:05:41.083550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.413 qpair failed and we were unable to recover it. 00:26:46.413 [2024-07-15 14:05:41.093419] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.413 [2024-07-15 14:05:41.093512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.413 [2024-07-15 14:05:41.093537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.413 [2024-07-15 14:05:41.093552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.413 [2024-07-15 14:05:41.093565] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.413 [2024-07-15 14:05:41.093593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.413 qpair failed and we were unable to recover it. 
00:26:46.413 [2024-07-15 14:05:41.103440] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.413 [2024-07-15 14:05:41.103536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.413 [2024-07-15 14:05:41.103562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.413 [2024-07-15 14:05:41.103577] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.413 [2024-07-15 14:05:41.103589] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.413 [2024-07-15 14:05:41.103617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.413 qpair failed and we were unable to recover it. 00:26:46.413 [2024-07-15 14:05:41.113554] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.413 [2024-07-15 14:05:41.113685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.413 [2024-07-15 14:05:41.113712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.413 [2024-07-15 14:05:41.113727] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.413 [2024-07-15 14:05:41.113747] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.413 [2024-07-15 14:05:41.113786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.413 qpair failed and we were unable to recover it. 00:26:46.413 [2024-07-15 14:05:41.123490] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.413 [2024-07-15 14:05:41.123636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.413 [2024-07-15 14:05:41.123662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.413 [2024-07-15 14:05:41.123677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.413 [2024-07-15 14:05:41.123690] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.413 [2024-07-15 14:05:41.123718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.413 qpair failed and we were unable to recover it. 
00:26:46.413 [2024-07-15 14:05:41.133550] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.414 [2024-07-15 14:05:41.133640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.414 [2024-07-15 14:05:41.133666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.414 [2024-07-15 14:05:41.133681] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.414 [2024-07-15 14:05:41.133694] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.414 [2024-07-15 14:05:41.133723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.414 qpair failed and we were unable to recover it. 00:26:46.414 [2024-07-15 14:05:41.143586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.414 [2024-07-15 14:05:41.143692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.414 [2024-07-15 14:05:41.143717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.414 [2024-07-15 14:05:41.143732] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.414 [2024-07-15 14:05:41.143770] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.414 [2024-07-15 14:05:41.143800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.414 qpair failed and we were unable to recover it. 00:26:46.414 [2024-07-15 14:05:41.153600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.414 [2024-07-15 14:05:41.153693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.414 [2024-07-15 14:05:41.153719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.414 [2024-07-15 14:05:41.153733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.414 [2024-07-15 14:05:41.153771] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.414 [2024-07-15 14:05:41.153801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.414 qpair failed and we were unable to recover it. 
00:26:46.414 [2024-07-15 14:05:41.163690] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.414 [2024-07-15 14:05:41.163783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.414 [2024-07-15 14:05:41.163815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.414 [2024-07-15 14:05:41.163831] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.414 [2024-07-15 14:05:41.163844] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.414 [2024-07-15 14:05:41.163873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.414 qpair failed and we were unable to recover it. 00:26:46.414 [2024-07-15 14:05:41.173643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.414 [2024-07-15 14:05:41.173765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.414 [2024-07-15 14:05:41.173792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.414 [2024-07-15 14:05:41.173808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.414 [2024-07-15 14:05:41.173821] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.414 [2024-07-15 14:05:41.173850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.414 qpair failed and we were unable to recover it. 00:26:46.414 [2024-07-15 14:05:41.183772] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.414 [2024-07-15 14:05:41.183917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.414 [2024-07-15 14:05:41.183942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.414 [2024-07-15 14:05:41.183958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.414 [2024-07-15 14:05:41.183971] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.414 [2024-07-15 14:05:41.184000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.414 qpair failed and we were unable to recover it. 
00:26:46.414 [2024-07-15 14:05:41.193672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.414 [2024-07-15 14:05:41.193787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.414 [2024-07-15 14:05:41.193811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.414 [2024-07-15 14:05:41.193826] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.414 [2024-07-15 14:05:41.193838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.414 [2024-07-15 14:05:41.193866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.414 qpair failed and we were unable to recover it. 00:26:46.414 [2024-07-15 14:05:41.203706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.414 [2024-07-15 14:05:41.203841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.414 [2024-07-15 14:05:41.203867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.414 [2024-07-15 14:05:41.203882] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.414 [2024-07-15 14:05:41.203895] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.414 [2024-07-15 14:05:41.203929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.414 qpair failed and we were unable to recover it. 00:26:46.414 [2024-07-15 14:05:41.213762] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.414 [2024-07-15 14:05:41.213864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.414 [2024-07-15 14:05:41.213890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.414 [2024-07-15 14:05:41.213905] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.414 [2024-07-15 14:05:41.213918] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.414 [2024-07-15 14:05:41.213946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.414 qpair failed and we were unable to recover it. 
00:26:46.414 [2024-07-15 14:05:41.223821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.414 [2024-07-15 14:05:41.223932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.414 [2024-07-15 14:05:41.223957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.414 [2024-07-15 14:05:41.223972] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.414 [2024-07-15 14:05:41.223985] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.414 [2024-07-15 14:05:41.224013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.414 qpair failed and we were unable to recover it. 00:26:46.414 [2024-07-15 14:05:41.233804] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.414 [2024-07-15 14:05:41.233899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.414 [2024-07-15 14:05:41.233925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.414 [2024-07-15 14:05:41.233941] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.414 [2024-07-15 14:05:41.233953] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.414 [2024-07-15 14:05:41.233981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.414 qpair failed and we were unable to recover it. 00:26:46.414 [2024-07-15 14:05:41.243870] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.414 [2024-07-15 14:05:41.244004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.414 [2024-07-15 14:05:41.244029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.414 [2024-07-15 14:05:41.244045] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.414 [2024-07-15 14:05:41.244084] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.414 [2024-07-15 14:05:41.244112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.414 qpair failed and we were unable to recover it. 
00:26:46.675 [2024-07-15 14:05:41.253885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.675 [2024-07-15 14:05:41.253981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.675 [2024-07-15 14:05:41.254012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.675 [2024-07-15 14:05:41.254028] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.675 [2024-07-15 14:05:41.254056] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.675 [2024-07-15 14:05:41.254085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.675 qpair failed and we were unable to recover it. 00:26:46.675 [2024-07-15 14:05:41.263955] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.675 [2024-07-15 14:05:41.264060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.675 [2024-07-15 14:05:41.264084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.675 [2024-07-15 14:05:41.264114] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.675 [2024-07-15 14:05:41.264126] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.675 [2024-07-15 14:05:41.264155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.675 qpair failed and we were unable to recover it. 00:26:46.675 [2024-07-15 14:05:41.273926] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.675 [2024-07-15 14:05:41.274045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.675 [2024-07-15 14:05:41.274070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.675 [2024-07-15 14:05:41.274085] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.675 [2024-07-15 14:05:41.274097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.675 [2024-07-15 14:05:41.274125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.675 qpair failed and we were unable to recover it. 
00:26:46.675 [2024-07-15 14:05:41.283997] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.675 [2024-07-15 14:05:41.284103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.675 [2024-07-15 14:05:41.284128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.675 [2024-07-15 14:05:41.284143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.675 [2024-07-15 14:05:41.284156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.675 [2024-07-15 14:05:41.284183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.675 qpair failed and we were unable to recover it. 00:26:46.675 [2024-07-15 14:05:41.293981] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.675 [2024-07-15 14:05:41.294118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.675 [2024-07-15 14:05:41.294142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.675 [2024-07-15 14:05:41.294157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.675 [2024-07-15 14:05:41.294178] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.675 [2024-07-15 14:05:41.294232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.675 qpair failed and we were unable to recover it. 00:26:46.675 [2024-07-15 14:05:41.304076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.675 [2024-07-15 14:05:41.304175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.675 [2024-07-15 14:05:41.304203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.675 [2024-07-15 14:05:41.304218] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.675 [2024-07-15 14:05:41.304231] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.675 [2024-07-15 14:05:41.304260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.675 qpair failed and we were unable to recover it. 
00:26:46.675 [2024-07-15 14:05:41.314085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.675 [2024-07-15 14:05:41.314182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.675 [2024-07-15 14:05:41.314208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.675 [2024-07-15 14:05:41.314223] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.675 [2024-07-15 14:05:41.314235] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.675 [2024-07-15 14:05:41.314263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.675 qpair failed and we were unable to recover it. 00:26:46.675 [2024-07-15 14:05:41.324109] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.675 [2024-07-15 14:05:41.324255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.675 [2024-07-15 14:05:41.324281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.675 [2024-07-15 14:05:41.324296] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.675 [2024-07-15 14:05:41.324308] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.675 [2024-07-15 14:05:41.324347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.675 qpair failed and we were unable to recover it. 00:26:46.675 [2024-07-15 14:05:41.334133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.675 [2024-07-15 14:05:41.334253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.675 [2024-07-15 14:05:41.334278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.675 [2024-07-15 14:05:41.334293] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.675 [2024-07-15 14:05:41.334306] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.675 [2024-07-15 14:05:41.334333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.675 qpair failed and we were unable to recover it. 
00:26:46.675 [2024-07-15 14:05:41.344197] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.675 [2024-07-15 14:05:41.344306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.675 [2024-07-15 14:05:41.344332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.675 [2024-07-15 14:05:41.344347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.675 [2024-07-15 14:05:41.344360] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.676 [2024-07-15 14:05:41.344399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.676 qpair failed and we were unable to recover it. 00:26:46.676 [2024-07-15 14:05:41.354193] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.676 [2024-07-15 14:05:41.354293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.676 [2024-07-15 14:05:41.354317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.676 [2024-07-15 14:05:41.354331] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.676 [2024-07-15 14:05:41.354343] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.676 [2024-07-15 14:05:41.354371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.676 qpair failed and we were unable to recover it. 00:26:46.676 [2024-07-15 14:05:41.364220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.676 [2024-07-15 14:05:41.364310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.676 [2024-07-15 14:05:41.364336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.676 [2024-07-15 14:05:41.364351] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.676 [2024-07-15 14:05:41.364363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.676 [2024-07-15 14:05:41.364391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.676 qpair failed and we were unable to recover it. 
00:26:46.676 [2024-07-15 14:05:41.374276] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.676 [2024-07-15 14:05:41.374371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.676 [2024-07-15 14:05:41.374395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.676 [2024-07-15 14:05:41.374410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.676 [2024-07-15 14:05:41.374422] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.676 [2024-07-15 14:05:41.374451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.676 qpair failed and we were unable to recover it. 00:26:46.676 [2024-07-15 14:05:41.384299] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.676 [2024-07-15 14:05:41.384397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.676 [2024-07-15 14:05:41.384423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.676 [2024-07-15 14:05:41.384438] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.676 [2024-07-15 14:05:41.384455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.676 [2024-07-15 14:05:41.384484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.676 qpair failed and we were unable to recover it. 00:26:46.676 [2024-07-15 14:05:41.394384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.676 [2024-07-15 14:05:41.394500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.676 [2024-07-15 14:05:41.394531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.676 [2024-07-15 14:05:41.394546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.676 [2024-07-15 14:05:41.394559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.676 [2024-07-15 14:05:41.394586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.676 qpair failed and we were unable to recover it. 
00:26:46.676 [2024-07-15 14:05:41.404323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.676 [2024-07-15 14:05:41.404463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.676 [2024-07-15 14:05:41.404488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.676 [2024-07-15 14:05:41.404503] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.676 [2024-07-15 14:05:41.404515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.676 [2024-07-15 14:05:41.404543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.676 qpair failed and we were unable to recover it. 00:26:46.676 [2024-07-15 14:05:41.414380] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.676 [2024-07-15 14:05:41.414481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.676 [2024-07-15 14:05:41.414507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.676 [2024-07-15 14:05:41.414522] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.676 [2024-07-15 14:05:41.414535] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.676 [2024-07-15 14:05:41.414562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.676 qpair failed and we were unable to recover it. 00:26:46.676 [2024-07-15 14:05:41.424396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.676 [2024-07-15 14:05:41.424496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.676 [2024-07-15 14:05:41.424520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.676 [2024-07-15 14:05:41.424534] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.676 [2024-07-15 14:05:41.424547] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.676 [2024-07-15 14:05:41.424574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.676 qpair failed and we were unable to recover it. 
00:26:46.676 [2024-07-15 14:05:41.434395] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.676 [2024-07-15 14:05:41.434514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.676 [2024-07-15 14:05:41.434539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.676 [2024-07-15 14:05:41.434555] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.676 [2024-07-15 14:05:41.434567] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.676 [2024-07-15 14:05:41.434595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.676 qpair failed and we were unable to recover it. 00:26:46.676 [2024-07-15 14:05:41.444484] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.676 [2024-07-15 14:05:41.444580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.676 [2024-07-15 14:05:41.444604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.676 [2024-07-15 14:05:41.444619] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.676 [2024-07-15 14:05:41.444631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.676 [2024-07-15 14:05:41.444659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.676 qpair failed and we were unable to recover it. 00:26:46.676 [2024-07-15 14:05:41.454457] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.676 [2024-07-15 14:05:41.454552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.676 [2024-07-15 14:05:41.454576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.676 [2024-07-15 14:05:41.454591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.676 [2024-07-15 14:05:41.454603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.676 [2024-07-15 14:05:41.454631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.676 qpair failed and we were unable to recover it. 
00:26:46.677 [2024-07-15 14:05:41.464514] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.677 [2024-07-15 14:05:41.464637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.677 [2024-07-15 14:05:41.464660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.677 [2024-07-15 14:05:41.464675] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.677 [2024-07-15 14:05:41.464688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.677 [2024-07-15 14:05:41.464715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.677 qpair failed and we were unable to recover it. 00:26:46.677 [2024-07-15 14:05:41.474462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.677 [2024-07-15 14:05:41.474557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.677 [2024-07-15 14:05:41.474581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.677 [2024-07-15 14:05:41.474596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.677 [2024-07-15 14:05:41.474613] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.677 [2024-07-15 14:05:41.474642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.677 qpair failed and we were unable to recover it. 00:26:46.677 [2024-07-15 14:05:41.484546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.677 [2024-07-15 14:05:41.484647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.677 [2024-07-15 14:05:41.484671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.677 [2024-07-15 14:05:41.484685] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.677 [2024-07-15 14:05:41.484697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.677 [2024-07-15 14:05:41.484726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.677 qpair failed and we were unable to recover it. 
00:26:46.677 [2024-07-15 14:05:41.494597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.677 [2024-07-15 14:05:41.494731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.677 [2024-07-15 14:05:41.494777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.677 [2024-07-15 14:05:41.494793] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.677 [2024-07-15 14:05:41.494807] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.677 [2024-07-15 14:05:41.494837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.677 qpair failed and we were unable to recover it. 00:26:46.677 [2024-07-15 14:05:41.504604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.677 [2024-07-15 14:05:41.504700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.677 [2024-07-15 14:05:41.504724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.677 [2024-07-15 14:05:41.504745] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.677 [2024-07-15 14:05:41.504775] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.677 [2024-07-15 14:05:41.504805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.677 qpair failed and we were unable to recover it. 00:26:46.938 [2024-07-15 14:05:41.514622] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.938 [2024-07-15 14:05:41.514718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.938 [2024-07-15 14:05:41.514764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.938 [2024-07-15 14:05:41.514781] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.938 [2024-07-15 14:05:41.514796] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.938 [2024-07-15 14:05:41.514825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.938 qpair failed and we were unable to recover it. 
00:26:46.938 [2024-07-15 14:05:41.524680] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.938 [2024-07-15 14:05:41.524823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.938 [2024-07-15 14:05:41.524848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.938 [2024-07-15 14:05:41.524862] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.938 [2024-07-15 14:05:41.524879] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.938 [2024-07-15 14:05:41.524909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.938 qpair failed and we were unable to recover it. 00:26:46.938 [2024-07-15 14:05:41.534633] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.938 [2024-07-15 14:05:41.534790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.938 [2024-07-15 14:05:41.534814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.938 [2024-07-15 14:05:41.534829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.938 [2024-07-15 14:05:41.534842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.938 [2024-07-15 14:05:41.534871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.938 qpair failed and we were unable to recover it. 00:26:46.938 [2024-07-15 14:05:41.544767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.938 [2024-07-15 14:05:41.544870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.938 [2024-07-15 14:05:41.544895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.938 [2024-07-15 14:05:41.544911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.938 [2024-07-15 14:05:41.544924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.938 [2024-07-15 14:05:41.544953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.938 qpair failed and we were unable to recover it. 
00:26:46.938 [2024-07-15 14:05:41.554747] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.938 [2024-07-15 14:05:41.554848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.938 [2024-07-15 14:05:41.554876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.938 [2024-07-15 14:05:41.554893] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.938 [2024-07-15 14:05:41.554906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.938 [2024-07-15 14:05:41.554944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.938 qpair failed and we were unable to recover it. 00:26:46.938 [2024-07-15 14:05:41.564776] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.938 [2024-07-15 14:05:41.564871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.938 [2024-07-15 14:05:41.564897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.938 [2024-07-15 14:05:41.564917] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.938 [2024-07-15 14:05:41.564931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.938 [2024-07-15 14:05:41.564960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.938 qpair failed and we were unable to recover it. 00:26:46.938 [2024-07-15 14:05:41.574812] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.938 [2024-07-15 14:05:41.574910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.938 [2024-07-15 14:05:41.574935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.938 [2024-07-15 14:05:41.574950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.938 [2024-07-15 14:05:41.574963] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.938 [2024-07-15 14:05:41.574992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.938 qpair failed and we were unable to recover it. 
00:26:46.938 [2024-07-15 14:05:41.584886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.938 [2024-07-15 14:05:41.584990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.938 [2024-07-15 14:05:41.585015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.938 [2024-07-15 14:05:41.585030] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.938 [2024-07-15 14:05:41.585043] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.938 [2024-07-15 14:05:41.585087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.938 qpair failed and we were unable to recover it. 00:26:46.938 [2024-07-15 14:05:41.594859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.939 [2024-07-15 14:05:41.594970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.939 [2024-07-15 14:05:41.594995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.939 [2024-07-15 14:05:41.595010] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.939 [2024-07-15 14:05:41.595022] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.939 [2024-07-15 14:05:41.595065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.939 qpair failed and we were unable to recover it. 00:26:46.939 [2024-07-15 14:05:41.604950] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.939 [2024-07-15 14:05:41.605084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.939 [2024-07-15 14:05:41.605108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.939 [2024-07-15 14:05:41.605123] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.939 [2024-07-15 14:05:41.605135] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.939 [2024-07-15 14:05:41.605163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.939 qpair failed and we were unable to recover it. 
00:26:46.939 [2024-07-15 14:05:41.614966] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.939 [2024-07-15 14:05:41.615098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.939 [2024-07-15 14:05:41.615122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.939 [2024-07-15 14:05:41.615137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.939 [2024-07-15 14:05:41.615150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.939 [2024-07-15 14:05:41.615178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.939 qpair failed and we were unable to recover it. 00:26:46.939 [2024-07-15 14:05:41.624976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.939 [2024-07-15 14:05:41.625125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.939 [2024-07-15 14:05:41.625149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.939 [2024-07-15 14:05:41.625163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.939 [2024-07-15 14:05:41.625176] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.939 [2024-07-15 14:05:41.625204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.939 qpair failed and we were unable to recover it. 00:26:46.939 [2024-07-15 14:05:41.634942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.939 [2024-07-15 14:05:41.635040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.939 [2024-07-15 14:05:41.635080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.939 [2024-07-15 14:05:41.635095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.939 [2024-07-15 14:05:41.635109] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.939 [2024-07-15 14:05:41.635138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.939 qpair failed and we were unable to recover it. 
00:26:46.939 [2024-07-15 14:05:41.645059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.939 [2024-07-15 14:05:41.645161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.939 [2024-07-15 14:05:41.645185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.939 [2024-07-15 14:05:41.645199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.939 [2024-07-15 14:05:41.645212] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.939 [2024-07-15 14:05:41.645240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.939 qpair failed and we were unable to recover it. 00:26:46.939 [2024-07-15 14:05:41.655046] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.939 [2024-07-15 14:05:41.655144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.939 [2024-07-15 14:05:41.655168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.939 [2024-07-15 14:05:41.655187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.939 [2024-07-15 14:05:41.655201] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.939 [2024-07-15 14:05:41.655229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.939 qpair failed and we were unable to recover it. 00:26:46.939 [2024-07-15 14:05:41.665131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.939 [2024-07-15 14:05:41.665254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.939 [2024-07-15 14:05:41.665278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.939 [2024-07-15 14:05:41.665294] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.939 [2024-07-15 14:05:41.665307] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.939 [2024-07-15 14:05:41.665335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.939 qpair failed and we were unable to recover it. 
00:26:46.939 [2024-07-15 14:05:41.675097] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.939 [2024-07-15 14:05:41.675193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.939 [2024-07-15 14:05:41.675216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.939 [2024-07-15 14:05:41.675231] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.939 [2024-07-15 14:05:41.675243] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.939 [2024-07-15 14:05:41.675272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.939 qpair failed and we were unable to recover it. 00:26:46.939 [2024-07-15 14:05:41.685102] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.939 [2024-07-15 14:05:41.685206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.939 [2024-07-15 14:05:41.685230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.939 [2024-07-15 14:05:41.685244] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.939 [2024-07-15 14:05:41.685257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.939 [2024-07-15 14:05:41.685285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.939 qpair failed and we were unable to recover it. 00:26:46.939 [2024-07-15 14:05:41.695166] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.939 [2024-07-15 14:05:41.695284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.939 [2024-07-15 14:05:41.695308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.939 [2024-07-15 14:05:41.695323] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.939 [2024-07-15 14:05:41.695336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.939 [2024-07-15 14:05:41.695364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.939 qpair failed and we were unable to recover it. 
00:26:46.939 [2024-07-15 14:05:41.705272] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.939 [2024-07-15 14:05:41.705372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.939 [2024-07-15 14:05:41.705395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.939 [2024-07-15 14:05:41.705409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.939 [2024-07-15 14:05:41.705422] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.939 [2024-07-15 14:05:41.705450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.939 qpair failed and we were unable to recover it. 00:26:46.939 [2024-07-15 14:05:41.715202] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.939 [2024-07-15 14:05:41.715299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.939 [2024-07-15 14:05:41.715324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.939 [2024-07-15 14:05:41.715339] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.939 [2024-07-15 14:05:41.715351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.939 [2024-07-15 14:05:41.715380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.939 qpair failed and we were unable to recover it. 00:26:46.939 [2024-07-15 14:05:41.725242] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.939 [2024-07-15 14:05:41.725377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.939 [2024-07-15 14:05:41.725402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.939 [2024-07-15 14:05:41.725417] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.939 [2024-07-15 14:05:41.725430] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.939 [2024-07-15 14:05:41.725458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.939 qpair failed and we were unable to recover it. 
00:26:46.939 [2024-07-15 14:05:41.735309] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.939 [2024-07-15 14:05:41.735447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.939 [2024-07-15 14:05:41.735471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.939 [2024-07-15 14:05:41.735486] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.939 [2024-07-15 14:05:41.735499] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.939 [2024-07-15 14:05:41.735526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.939 qpair failed and we were unable to recover it. 00:26:46.939 [2024-07-15 14:05:41.745305] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.939 [2024-07-15 14:05:41.745404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.939 [2024-07-15 14:05:41.745429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.939 [2024-07-15 14:05:41.745448] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.939 [2024-07-15 14:05:41.745461] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.939 [2024-07-15 14:05:41.745490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.939 qpair failed and we were unable to recover it. 00:26:46.939 [2024-07-15 14:05:41.755323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.939 [2024-07-15 14:05:41.755434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.939 [2024-07-15 14:05:41.755458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.939 [2024-07-15 14:05:41.755473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.939 [2024-07-15 14:05:41.755486] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.939 [2024-07-15 14:05:41.755514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.939 qpair failed and we were unable to recover it. 
00:26:46.939 [2024-07-15 14:05:41.765402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.939 [2024-07-15 14:05:41.765535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.939 [2024-07-15 14:05:41.765560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.939 [2024-07-15 14:05:41.765575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.939 [2024-07-15 14:05:41.765587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.939 [2024-07-15 14:05:41.765616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.939 qpair failed and we were unable to recover it. 00:26:46.939 [2024-07-15 14:05:41.775371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.939 [2024-07-15 14:05:41.775463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.939 [2024-07-15 14:05:41.775487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.939 [2024-07-15 14:05:41.775502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.939 [2024-07-15 14:05:41.775515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:46.939 [2024-07-15 14:05:41.775543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:46.939 qpair failed and we were unable to recover it. 00:26:47.200 [2024-07-15 14:05:41.785496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.200 [2024-07-15 14:05:41.785608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.200 [2024-07-15 14:05:41.785633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.200 [2024-07-15 14:05:41.785648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.200 [2024-07-15 14:05:41.785661] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:47.200 [2024-07-15 14:05:41.785704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.200 qpair failed and we were unable to recover it. 
00:26:47.201 [2024-07-15 14:05:41.795447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.201 [2024-07-15 14:05:41.795551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.201 [2024-07-15 14:05:41.795576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.201 [2024-07-15 14:05:41.795607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.201 [2024-07-15 14:05:41.795620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:47.201 [2024-07-15 14:05:41.795650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-07-15 14:05:41.805434] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.201 [2024-07-15 14:05:41.805530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.201 [2024-07-15 14:05:41.805557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.201 [2024-07-15 14:05:41.805572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.201 [2024-07-15 14:05:41.805585] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:47.201 [2024-07-15 14:05:41.805614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-07-15 14:05:41.815534] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.201 [2024-07-15 14:05:41.815630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.201 [2024-07-15 14:05:41.815654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.201 [2024-07-15 14:05:41.815669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.201 [2024-07-15 14:05:41.815682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:47.201 [2024-07-15 14:05:41.815711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.201 qpair failed and we were unable to recover it. 
00:26:47.201 [2024-07-15 14:05:41.825520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.201 [2024-07-15 14:05:41.825620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.201 [2024-07-15 14:05:41.825644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.201 [2024-07-15 14:05:41.825659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.201 [2024-07-15 14:05:41.825671] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:47.201 [2024-07-15 14:05:41.825700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-07-15 14:05:41.835582] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.201 [2024-07-15 14:05:41.835716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.201 [2024-07-15 14:05:41.835769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.201 [2024-07-15 14:05:41.835788] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.201 [2024-07-15 14:05:41.835802] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:47.201 [2024-07-15 14:05:41.835831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-07-15 14:05:41.845537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.201 [2024-07-15 14:05:41.845671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.201 [2024-07-15 14:05:41.845694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.201 [2024-07-15 14:05:41.845709] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.201 [2024-07-15 14:05:41.845722] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:47.201 [2024-07-15 14:05:41.845774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.201 qpair failed and we were unable to recover it. 
00:26:47.201 [2024-07-15 14:05:41.855588] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.201 [2024-07-15 14:05:41.855687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.201 [2024-07-15 14:05:41.855711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.201 [2024-07-15 14:05:41.855752] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.201 [2024-07-15 14:05:41.855768] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:47.201 [2024-07-15 14:05:41.855797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-07-15 14:05:41.865667] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.201 [2024-07-15 14:05:41.865809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.201 [2024-07-15 14:05:41.865834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.201 [2024-07-15 14:05:41.865849] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.201 [2024-07-15 14:05:41.865861] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:47.201 [2024-07-15 14:05:41.865891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-07-15 14:05:41.875709] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.201 [2024-07-15 14:05:41.875824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.201 [2024-07-15 14:05:41.875849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.201 [2024-07-15 14:05:41.875864] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.201 [2024-07-15 14:05:41.875877] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:47.201 [2024-07-15 14:05:41.875913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.201 qpair failed and we were unable to recover it. 
00:26:47.201 [2024-07-15 14:05:41.885695] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.201 [2024-07-15 14:05:41.885871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.201 [2024-07-15 14:05:41.885896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.201 [2024-07-15 14:05:41.885911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.201 [2024-07-15 14:05:41.885924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:47.201 [2024-07-15 14:05:41.885954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-07-15 14:05:41.895698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.201 [2024-07-15 14:05:41.895835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.201 [2024-07-15 14:05:41.895861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.201 [2024-07-15 14:05:41.895877] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.201 [2024-07-15 14:05:41.895890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:47.201 [2024-07-15 14:05:41.895919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-07-15 14:05:41.905733] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.201 [2024-07-15 14:05:41.905890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.201 [2024-07-15 14:05:41.905917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.201 [2024-07-15 14:05:41.905932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.201 [2024-07-15 14:05:41.905945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:47.201 [2024-07-15 14:05:41.905974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.201 qpair failed and we were unable to recover it. 
00:26:47.201 [2024-07-15 14:05:41.915786] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.201 [2024-07-15 14:05:41.915886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.201 [2024-07-15 14:05:41.915911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.201 [2024-07-15 14:05:41.915925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.201 [2024-07-15 14:05:41.915939] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:47.201 [2024-07-15 14:05:41.915967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-07-15 14:05:41.925826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.201 [2024-07-15 14:05:41.925923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.201 [2024-07-15 14:05:41.925952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.202 [2024-07-15 14:05:41.925968] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.202 [2024-07-15 14:05:41.925981] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:47.202 [2024-07-15 14:05:41.926009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.202 qpair failed and we were unable to recover it. 00:26:47.202 [2024-07-15 14:05:41.935840] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.202 [2024-07-15 14:05:41.935980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.202 [2024-07-15 14:05:41.936006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.202 [2024-07-15 14:05:41.936036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.202 [2024-07-15 14:05:41.936050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:47.202 [2024-07-15 14:05:41.936079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.202 qpair failed and we were unable to recover it. 
00:26:47.202 [2024-07-15 14:05:41.945857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.202 [2024-07-15 14:05:41.945966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.202 [2024-07-15 14:05:41.945990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.202 [2024-07-15 14:05:41.946005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.202 [2024-07-15 14:05:41.946018] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:47.202 [2024-07-15 14:05:41.946062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.202 qpair failed and we were unable to recover it. 00:26:47.202 [2024-07-15 14:05:41.955840] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.202 [2024-07-15 14:05:41.955939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.202 [2024-07-15 14:05:41.955964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.202 [2024-07-15 14:05:41.955979] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.202 [2024-07-15 14:05:41.955992] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:47.202 [2024-07-15 14:05:41.956020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.202 qpair failed and we were unable to recover it. 00:26:47.202 [2024-07-15 14:05:41.965881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.202 [2024-07-15 14:05:41.965978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.202 [2024-07-15 14:05:41.966002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.202 [2024-07-15 14:05:41.966017] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.202 [2024-07-15 14:05:41.966030] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:47.202 [2024-07-15 14:05:41.966079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.202 qpair failed and we were unable to recover it. 
00:26:47.202 [2024-07-15 14:05:41.975951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.202 [2024-07-15 14:05:41.976066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.202 [2024-07-15 14:05:41.976105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.202 [2024-07-15 14:05:41.976121] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.202 [2024-07-15 14:05:41.976134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:47.202 [2024-07-15 14:05:41.976164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.202 qpair failed and we were unable to recover it. 00:26:47.202 [2024-07-15 14:05:41.985982] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.202 [2024-07-15 14:05:41.986133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.202 [2024-07-15 14:05:41.986157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.202 [2024-07-15 14:05:41.986172] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.202 [2024-07-15 14:05:41.986184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:47.202 [2024-07-15 14:05:41.986213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.202 qpair failed and we were unable to recover it. 00:26:47.202 [2024-07-15 14:05:41.996091] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.202 [2024-07-15 14:05:41.996193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.202 [2024-07-15 14:05:41.996217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.202 [2024-07-15 14:05:41.996232] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.202 [2024-07-15 14:05:41.996244] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:47.202 [2024-07-15 14:05:41.996272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.202 qpair failed and we were unable to recover it. 
00:26:47.202 [2024-07-15 14:05:42.006048] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.202 [2024-07-15 14:05:42.006157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.202 [2024-07-15 14:05:42.006181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.202 [2024-07-15 14:05:42.006195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.202 [2024-07-15 14:05:42.006208] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:47.202 [2024-07-15 14:05:42.006236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.202 qpair failed and we were unable to recover it. 00:26:47.202 [2024-07-15 14:05:42.016076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.202 [2024-07-15 14:05:42.016171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.202 [2024-07-15 14:05:42.016200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.202 [2024-07-15 14:05:42.016216] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.202 [2024-07-15 14:05:42.016229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:47.202 [2024-07-15 14:05:42.016257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.202 qpair failed and we were unable to recover it. 00:26:47.202 [2024-07-15 14:05:42.026106] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.202 [2024-07-15 14:05:42.026205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.202 [2024-07-15 14:05:42.026229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.202 [2024-07-15 14:05:42.026244] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.202 [2024-07-15 14:05:42.026257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:47.202 [2024-07-15 14:05:42.026285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.202 qpair failed and we were unable to recover it. 
00:26:47.202 [2024-07-15 14:05:42.036115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.202 [2024-07-15 14:05:42.036251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.202 [2024-07-15 14:05:42.036276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.202 [2024-07-15 14:05:42.036292] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.202 [2024-07-15 14:05:42.036304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:47.202 [2024-07-15 14:05:42.036333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.202 qpair failed and we were unable to recover it. 00:26:47.462 [2024-07-15 14:05:42.046150] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.462 [2024-07-15 14:05:42.046265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.462 [2024-07-15 14:05:42.046290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.462 [2024-07-15 14:05:42.046305] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.462 [2024-07-15 14:05:42.046318] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:47.462 [2024-07-15 14:05:42.046348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.462 qpair failed and we were unable to recover it. 00:26:47.462 [2024-07-15 14:05:42.056232] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.462 [2024-07-15 14:05:42.056332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.462 [2024-07-15 14:05:42.056359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.462 [2024-07-15 14:05:42.056375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.462 [2024-07-15 14:05:42.056388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:47.462 [2024-07-15 14:05:42.056422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.462 qpair failed and we were unable to recover it. 
00:26:47.462 [2024-07-15 14:05:42.066209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.462 [2024-07-15 14:05:42.066353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.462 [2024-07-15 14:05:42.066379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.462 [2024-07-15 14:05:42.066395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.462 [2024-07-15 14:05:42.066407] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:47.462 [2024-07-15 14:05:42.066436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.462 qpair failed and we were unable to recover it. 00:26:47.462 [2024-07-15 14:05:42.076227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.462 [2024-07-15 14:05:42.076329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.462 [2024-07-15 14:05:42.076354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.462 [2024-07-15 14:05:42.076369] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.462 [2024-07-15 14:05:42.076381] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:47.462 [2024-07-15 14:05:42.076408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.462 qpair failed and we were unable to recover it. 00:26:47.462 [2024-07-15 14:05:42.086250] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.462 [2024-07-15 14:05:42.086351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.462 [2024-07-15 14:05:42.086374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.462 [2024-07-15 14:05:42.086389] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.462 [2024-07-15 14:05:42.086401] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:47.462 [2024-07-15 14:05:42.086428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.462 qpair failed and we were unable to recover it. 
00:26:47.462 [2024-07-15 14:05:42.096302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.463 [2024-07-15 14:05:42.096395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.463 [2024-07-15 14:05:42.096419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.463 [2024-07-15 14:05:42.096433] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.463 [2024-07-15 14:05:42.096446] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:47.463 [2024-07-15 14:05:42.096474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.463 qpair failed and we were unable to recover it. 00:26:47.463 [2024-07-15 14:05:42.106367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.463 [2024-07-15 14:05:42.106492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.463 [2024-07-15 14:05:42.106522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.463 [2024-07-15 14:05:42.106537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.463 [2024-07-15 14:05:42.106550] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:47.463 [2024-07-15 14:05:42.106578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.463 qpair failed and we were unable to recover it. 00:26:47.463 [2024-07-15 14:05:42.116347] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.463 [2024-07-15 14:05:42.116446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.463 [2024-07-15 14:05:42.116470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.463 [2024-07-15 14:05:42.116485] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.463 [2024-07-15 14:05:42.116498] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:47.463 [2024-07-15 14:05:42.116525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.463 qpair failed and we were unable to recover it. 
00:26:47.463 [2024-07-15 14:05:42.126379] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.463 [2024-07-15 14:05:42.126514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.463 [2024-07-15 14:05:42.126539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.463 [2024-07-15 14:05:42.126554] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.463 [2024-07-15 14:05:42.126567] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:47.463 [2024-07-15 14:05:42.126594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.463 qpair failed and we were unable to recover it. 00:26:47.463 [2024-07-15 14:05:42.136404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.463 [2024-07-15 14:05:42.136533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.463 [2024-07-15 14:05:42.136557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.463 [2024-07-15 14:05:42.136572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.463 [2024-07-15 14:05:42.136585] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:47.463 [2024-07-15 14:05:42.136612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.463 qpair failed and we were unable to recover it. 00:26:47.463 [2024-07-15 14:05:42.146404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.463 [2024-07-15 14:05:42.146512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.463 [2024-07-15 14:05:42.146536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.463 [2024-07-15 14:05:42.146551] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.463 [2024-07-15 14:05:42.146571] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:47.463 [2024-07-15 14:05:42.146601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.463 qpair failed and we were unable to recover it. 
00:26:47.463 [2024-07-15 14:05:42.156417] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.463 [2024-07-15 14:05:42.156523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.463 [2024-07-15 14:05:42.156547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.463 [2024-07-15 14:05:42.156562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.463 [2024-07-15 14:05:42.156575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:47.463 [2024-07-15 14:05:42.156602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.463 qpair failed and we were unable to recover it. 00:26:47.463 [2024-07-15 14:05:42.166545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.463 [2024-07-15 14:05:42.166646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.463 [2024-07-15 14:05:42.166670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.463 [2024-07-15 14:05:42.166684] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.463 [2024-07-15 14:05:42.166698] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:47.463 [2024-07-15 14:05:42.166725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.463 qpair failed and we were unable to recover it. 00:26:47.463 [2024-07-15 14:05:42.176472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.463 [2024-07-15 14:05:42.176560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.463 [2024-07-15 14:05:42.176583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.463 [2024-07-15 14:05:42.176597] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.463 [2024-07-15 14:05:42.176610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:47.463 [2024-07-15 14:05:42.176637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.463 qpair failed and we were unable to recover it. 
00:26:47.463 [2024-07-15 14:05:42.186577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.463 [2024-07-15 14:05:42.186687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.463 [2024-07-15 14:05:42.186711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.463 [2024-07-15 14:05:42.186726] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.463 [2024-07-15 14:05:42.186746] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:47.463 [2024-07-15 14:05:42.186777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.463 qpair failed and we were unable to recover it. 00:26:47.463 [2024-07-15 14:05:42.196573] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.463 [2024-07-15 14:05:42.196681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.463 [2024-07-15 14:05:42.196705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.463 [2024-07-15 14:05:42.196719] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.463 [2024-07-15 14:05:42.196732] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:47.463 [2024-07-15 14:05:42.196785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.463 qpair failed and we were unable to recover it. 00:26:47.463 [2024-07-15 14:05:42.206553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.463 [2024-07-15 14:05:42.206654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.463 [2024-07-15 14:05:42.206677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.463 [2024-07-15 14:05:42.206691] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.463 [2024-07-15 14:05:42.206704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:47.463 [2024-07-15 14:05:42.206731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.463 qpair failed and we were unable to recover it. 
00:26:47.463 [2024-07-15 14:05:42.216646] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.463 [2024-07-15 14:05:42.216761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.463 [2024-07-15 14:05:42.216786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.463 [2024-07-15 14:05:42.216801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.463 [2024-07-15 14:05:42.216814] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:47.463 [2024-07-15 14:05:42.216844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.463 qpair failed and we were unable to recover it. 00:26:47.463 [2024-07-15 14:05:42.226642] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.463 [2024-07-15 14:05:42.226808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.463 [2024-07-15 14:05:42.226835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.463 [2024-07-15 14:05:42.226850] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.463 [2024-07-15 14:05:42.226863] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:47.463 [2024-07-15 14:05:42.226893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.463 qpair failed and we were unable to recover it. 00:26:47.463 [2024-07-15 14:05:42.236783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.464 [2024-07-15 14:05:42.236884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.464 [2024-07-15 14:05:42.236910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.464 [2024-07-15 14:05:42.236926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.464 [2024-07-15 14:05:42.236945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:47.464 [2024-07-15 14:05:42.236975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.464 qpair failed and we were unable to recover it. 
00:26:47.464 [2024-07-15 14:05:42.246675] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.464 [2024-07-15 14:05:42.246790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.464 [2024-07-15 14:05:42.246815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.464 [2024-07-15 14:05:42.246829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.464 [2024-07-15 14:05:42.246842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:47.464 [2024-07-15 14:05:42.246872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.464 qpair failed and we were unable to recover it. 00:26:47.464 [2024-07-15 14:05:42.256695] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.464 [2024-07-15 14:05:42.256846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.464 [2024-07-15 14:05:42.256873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.464 [2024-07-15 14:05:42.256888] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.464 [2024-07-15 14:05:42.256901] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:47.464 [2024-07-15 14:05:42.256931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.464 qpair failed and we were unable to recover it. 00:26:47.464 [2024-07-15 14:05:42.266769] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.464 [2024-07-15 14:05:42.266872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.464 [2024-07-15 14:05:42.266899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.464 [2024-07-15 14:05:42.266914] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.464 [2024-07-15 14:05:42.266927] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:47.464 [2024-07-15 14:05:42.266956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.464 qpair failed and we were unable to recover it. 
00:26:47.464 [2024-07-15 14:05:42.276789] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.464 [2024-07-15 14:05:42.276895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.464 [2024-07-15 14:05:42.276920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.464 [2024-07-15 14:05:42.276935] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.464 [2024-07-15 14:05:42.276948] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:47.464 [2024-07-15 14:05:42.276976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.464 qpair failed and we were unable to recover it. 00:26:47.464 [2024-07-15 14:05:42.286850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.464 [2024-07-15 14:05:42.286988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.464 [2024-07-15 14:05:42.287026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.464 [2024-07-15 14:05:42.287041] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.464 [2024-07-15 14:05:42.287054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:47.464 [2024-07-15 14:05:42.287082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.464 qpair failed and we were unable to recover it. 00:26:47.464 [2024-07-15 14:05:42.296901] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.464 [2024-07-15 14:05:42.297000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.464 [2024-07-15 14:05:42.297024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.464 [2024-07-15 14:05:42.297040] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.464 [2024-07-15 14:05:42.297053] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:47.464 [2024-07-15 14:05:42.297095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.464 qpair failed and we were unable to recover it. 
00:26:47.723 [2024-07-15 14:05:42.307002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.723 [2024-07-15 14:05:42.307126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.723 [2024-07-15 14:05:42.307154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.723 [2024-07-15 14:05:42.307169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.723 [2024-07-15 14:05:42.307182] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:47.723 [2024-07-15 14:05:42.307212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.723 qpair failed and we were unable to recover it. 00:26:47.723 [2024-07-15 14:05:42.317044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.723 [2024-07-15 14:05:42.317171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.723 [2024-07-15 14:05:42.317197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.723 [2024-07-15 14:05:42.317213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.723 [2024-07-15 14:05:42.317225] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:47.723 [2024-07-15 14:05:42.317253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.723 qpair failed and we were unable to recover it. 00:26:47.723 [2024-07-15 14:05:42.327035] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.723 [2024-07-15 14:05:42.327130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.723 [2024-07-15 14:05:42.327155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.723 [2024-07-15 14:05:42.327174] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.723 [2024-07-15 14:05:42.327188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:47.723 [2024-07-15 14:05:42.327216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.723 qpair failed and we were unable to recover it. 
00:26:47.723 [2024-07-15 14:05:42.337138] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.723 [2024-07-15 14:05:42.337265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.723 [2024-07-15 14:05:42.337291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.723 [2024-07-15 14:05:42.337306] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.723 [2024-07-15 14:05:42.337318] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:47.723 [2024-07-15 14:05:42.337346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.723 qpair failed and we were unable to recover it. 00:26:47.723 [2024-07-15 14:05:42.347151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.723 [2024-07-15 14:05:42.347296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.723 [2024-07-15 14:05:42.347321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.723 [2024-07-15 14:05:42.347335] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.723 [2024-07-15 14:05:42.347348] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:47.723 [2024-07-15 14:05:42.347376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.723 qpair failed and we were unable to recover it. 00:26:47.723 [2024-07-15 14:05:42.357072] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.723 [2024-07-15 14:05:42.357186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.723 [2024-07-15 14:05:42.357212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.723 [2024-07-15 14:05:42.357228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.723 [2024-07-15 14:05:42.357240] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:47.723 [2024-07-15 14:05:42.357268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.723 qpair failed and we were unable to recover it. 
00:26:47.723 [2024-07-15 14:05:42.367086] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.723 [2024-07-15 14:05:42.367185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.723 [2024-07-15 14:05:42.367211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.723 [2024-07-15 14:05:42.367226] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.723 [2024-07-15 14:05:42.367239] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:47.723 [2024-07-15 14:05:42.367277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:47.723 qpair failed and we were unable to recover it. 00:26:47.723 Write completed with error (sct=0, sc=8) 00:26:47.723 starting I/O failed 00:26:47.723 Write completed with error (sct=0, sc=8) 00:26:47.723 starting I/O failed 00:26:47.723 Read completed with error (sct=0, sc=8) 00:26:47.723 starting I/O failed 00:26:47.723 Write completed with error (sct=0, sc=8) 00:26:47.723 starting I/O failed 00:26:47.723 Write completed with error (sct=0, sc=8) 00:26:47.723 starting I/O failed 00:26:47.723 Read completed with error (sct=0, sc=8) 00:26:47.723 starting I/O failed 00:26:47.723 Write completed with error (sct=0, sc=8) 00:26:47.723 starting I/O failed 00:26:47.723 Read completed with error (sct=0, sc=8) 00:26:47.723 starting I/O failed 00:26:47.723 Read completed with error (sct=0, sc=8) 00:26:47.723 starting I/O failed 00:26:47.723 Write completed with error (sct=0, sc=8) 00:26:47.723 starting I/O failed 00:26:47.723 Write completed with error (sct=0, sc=8) 00:26:47.723 starting I/O failed 00:26:47.723 Read completed with error (sct=0, sc=8) 00:26:47.723 starting I/O failed 00:26:47.723 Read completed with error (sct=0, sc=8) 00:26:47.723 starting I/O failed 00:26:47.723 Read completed with error (sct=0, sc=8) 00:26:47.723 starting I/O failed 00:26:47.723 Read completed with error (sct=0, sc=8) 00:26:47.723 starting I/O failed 00:26:47.723 Read completed with error (sct=0, sc=8) 00:26:47.723 starting I/O failed 00:26:47.723 Read completed with error (sct=0, sc=8) 00:26:47.723 starting I/O failed 00:26:47.723 Write completed with error (sct=0, sc=8) 00:26:47.723 starting I/O failed 00:26:47.723 Write completed with error (sct=0, sc=8) 00:26:47.723 starting I/O failed 00:26:47.723 Read completed with error (sct=0, sc=8) 00:26:47.723 starting I/O failed 00:26:47.723 Write completed with error (sct=0, sc=8) 00:26:47.723 starting I/O failed 00:26:47.723 Write completed with error (sct=0, sc=8) 00:26:47.723 starting I/O failed 00:26:47.723 Write completed with error (sct=0, sc=8) 00:26:47.723 starting I/O failed 00:26:47.723 Read completed with error (sct=0, sc=8) 00:26:47.723 starting I/O failed 00:26:47.723 Write completed with error (sct=0, sc=8) 00:26:47.723 starting I/O failed 00:26:47.723 Read completed with error (sct=0, sc=8) 00:26:47.723 starting I/O failed 00:26:47.723 Write completed with error (sct=0, sc=8) 00:26:47.723 starting I/O failed 00:26:47.724 Read completed with error (sct=0, sc=8) 00:26:47.724 starting I/O failed 00:26:47.724 Write completed with error (sct=0, sc=8) 00:26:47.724 starting I/O failed 00:26:47.724 Write completed with 
error (sct=0, sc=8) 00:26:47.724 starting I/O failed 00:26:47.724 Read completed with error (sct=0, sc=8) 00:26:47.724 starting I/O failed 00:26:47.724 Write completed with error (sct=0, sc=8) 00:26:47.724 starting I/O failed 00:26:47.724 [2024-07-15 14:05:42.367631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:47.724 [2024-07-15 14:05:42.367800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23f6ae0 is same with the state(5) to be set 00:26:47.724 [2024-07-15 14:05:42.377128] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.724 [2024-07-15 14:05:42.377223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.724 [2024-07-15 14:05:42.377255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.724 [2024-07-15 14:05:42.377271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.724 [2024-07-15 14:05:42.377284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:47.724 [2024-07-15 14:05:42.377316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.724 qpair failed and we were unable to recover it. 00:26:47.724 [2024-07-15 14:05:42.387258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.724 [2024-07-15 14:05:42.387378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.724 [2024-07-15 14:05:42.387405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.724 [2024-07-15 14:05:42.387420] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.724 [2024-07-15 14:05:42.387433] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:47.724 [2024-07-15 14:05:42.387462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.724 qpair failed and we were unable to recover it. 
00:26:47.724 [2024-07-15 14:05:42.397182] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.724 [2024-07-15 14:05:42.397284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.724 [2024-07-15 14:05:42.397309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.724 [2024-07-15 14:05:42.397324] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.724 [2024-07-15 14:05:42.397336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:47.724 [2024-07-15 14:05:42.397365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.724 qpair failed and we were unable to recover it. 00:26:47.724 [2024-07-15 14:05:42.407231] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.724 [2024-07-15 14:05:42.407327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.724 [2024-07-15 14:05:42.407352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.724 [2024-07-15 14:05:42.407367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.724 [2024-07-15 14:05:42.407379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:47.724 [2024-07-15 14:05:42.407410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.724 qpair failed and we were unable to recover it. 00:26:47.724 [2024-07-15 14:05:42.417301] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.724 [2024-07-15 14:05:42.417399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.724 [2024-07-15 14:05:42.417425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.724 [2024-07-15 14:05:42.417440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.724 [2024-07-15 14:05:42.417453] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:47.724 [2024-07-15 14:05:42.417483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.724 qpair failed and we were unable to recover it. 
00:26:47.724 [2024-07-15 14:05:42.427304] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.724 [2024-07-15 14:05:42.427459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.724 [2024-07-15 14:05:42.427485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.724 [2024-07-15 14:05:42.427500] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.724 [2024-07-15 14:05:42.427512] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:47.724 [2024-07-15 14:05:42.427542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.724 qpair failed and we were unable to recover it. 00:26:47.724 [2024-07-15 14:05:42.437283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.724 [2024-07-15 14:05:42.437386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.724 [2024-07-15 14:05:42.437412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.724 [2024-07-15 14:05:42.437433] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.724 [2024-07-15 14:05:42.437447] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:47.724 [2024-07-15 14:05:42.437476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.724 qpair failed and we were unable to recover it. 00:26:47.724 [2024-07-15 14:05:42.447322] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.724 [2024-07-15 14:05:42.447423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.724 [2024-07-15 14:05:42.447447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.724 [2024-07-15 14:05:42.447462] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.724 [2024-07-15 14:05:42.447474] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:47.724 [2024-07-15 14:05:42.447503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.724 qpair failed and we were unable to recover it. 
00:26:47.724 [2024-07-15 14:05:42.457336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.724 [2024-07-15 14:05:42.457429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.724 [2024-07-15 14:05:42.457455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.724 [2024-07-15 14:05:42.457471] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.724 [2024-07-15 14:05:42.457483] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:47.724 [2024-07-15 14:05:42.457512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.724 qpair failed and we were unable to recover it. 00:26:47.724 [2024-07-15 14:05:42.467377] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.724 [2024-07-15 14:05:42.467520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.724 [2024-07-15 14:05:42.467546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.724 [2024-07-15 14:05:42.467561] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.724 [2024-07-15 14:05:42.467573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:47.724 [2024-07-15 14:05:42.467601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.724 qpair failed and we were unable to recover it. 00:26:47.724 [2024-07-15 14:05:42.477400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.724 [2024-07-15 14:05:42.477498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.724 [2024-07-15 14:05:42.477523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.724 [2024-07-15 14:05:42.477537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.724 [2024-07-15 14:05:42.477550] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:47.724 [2024-07-15 14:05:42.477578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.724 qpair failed and we were unable to recover it. 
00:26:47.724 [2024-07-15 14:05:42.487439] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.724 [2024-07-15 14:05:42.487531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.724 [2024-07-15 14:05:42.487555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.724 [2024-07-15 14:05:42.487570] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.724 [2024-07-15 14:05:42.487582] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:47.724 [2024-07-15 14:05:42.487611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.724 qpair failed and we were unable to recover it. 00:26:47.724 [2024-07-15 14:05:42.497461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.724 [2024-07-15 14:05:42.497552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.724 [2024-07-15 14:05:42.497578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.724 [2024-07-15 14:05:42.497593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.724 [2024-07-15 14:05:42.497605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:47.724 [2024-07-15 14:05:42.497634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.724 qpair failed and we were unable to recover it. 00:26:47.724 [2024-07-15 14:05:42.507497] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.724 [2024-07-15 14:05:42.507602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.725 [2024-07-15 14:05:42.507627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.725 [2024-07-15 14:05:42.507642] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.725 [2024-07-15 14:05:42.507655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:47.725 [2024-07-15 14:05:42.507684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.725 qpair failed and we were unable to recover it. 
00:26:47.725 [2024-07-15 14:05:42.517537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.725 [2024-07-15 14:05:42.517658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.725 [2024-07-15 14:05:42.517684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.725 [2024-07-15 14:05:42.517699] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.725 [2024-07-15 14:05:42.517711] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:47.725 [2024-07-15 14:05:42.517763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.725 qpair failed and we were unable to recover it. 00:26:47.725 [2024-07-15 14:05:42.527531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.725 [2024-07-15 14:05:42.527631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.725 [2024-07-15 14:05:42.527661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.725 [2024-07-15 14:05:42.527676] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.725 [2024-07-15 14:05:42.527689] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:47.725 [2024-07-15 14:05:42.527731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.725 qpair failed and we were unable to recover it. 00:26:47.725 [2024-07-15 14:05:42.537597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.725 [2024-07-15 14:05:42.537693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.725 [2024-07-15 14:05:42.537717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.725 [2024-07-15 14:05:42.537757] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.725 [2024-07-15 14:05:42.537771] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:47.725 [2024-07-15 14:05:42.537801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.725 qpair failed and we were unable to recover it. 
00:26:47.725 [2024-07-15 14:05:42.547628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.725 [2024-07-15 14:05:42.547780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.725 [2024-07-15 14:05:42.547804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.725 [2024-07-15 14:05:42.547818] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.725 [2024-07-15 14:05:42.547831] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:47.725 [2024-07-15 14:05:42.547861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.725 qpair failed and we were unable to recover it. 00:26:47.725 [2024-07-15 14:05:42.557695] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.725 [2024-07-15 14:05:42.557825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.725 [2024-07-15 14:05:42.557852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.725 [2024-07-15 14:05:42.557867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.725 [2024-07-15 14:05:42.557880] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:47.725 [2024-07-15 14:05:42.557910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.725 qpair failed and we were unable to recover it. 00:26:47.984 [2024-07-15 14:05:42.567667] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.984 [2024-07-15 14:05:42.567781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.984 [2024-07-15 14:05:42.567806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.984 [2024-07-15 14:05:42.567822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.984 [2024-07-15 14:05:42.567834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:47.984 [2024-07-15 14:05:42.567870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.984 qpair failed and we were unable to recover it. 
00:26:47.984 [2024-07-15 14:05:42.577666] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.984 [2024-07-15 14:05:42.577782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.984 [2024-07-15 14:05:42.577808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.984 [2024-07-15 14:05:42.577823] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.984 [2024-07-15 14:05:42.577836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:47.984 [2024-07-15 14:05:42.577867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.984 qpair failed and we were unable to recover it. 00:26:47.984 [2024-07-15 14:05:42.587784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.984 [2024-07-15 14:05:42.587889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.984 [2024-07-15 14:05:42.587916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.984 [2024-07-15 14:05:42.587931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.984 [2024-07-15 14:05:42.587944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:47.984 [2024-07-15 14:05:42.587975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.984 qpair failed and we were unable to recover it. 00:26:47.984 [2024-07-15 14:05:42.597758] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.984 [2024-07-15 14:05:42.597907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.984 [2024-07-15 14:05:42.597934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.984 [2024-07-15 14:05:42.597950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.984 [2024-07-15 14:05:42.597962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:47.984 [2024-07-15 14:05:42.597993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.984 qpair failed and we were unable to recover it. 
00:26:47.984 [2024-07-15 14:05:42.607806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.984 [2024-07-15 14:05:42.607915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.984 [2024-07-15 14:05:42.607943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.984 [2024-07-15 14:05:42.607959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.984 [2024-07-15 14:05:42.607971] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:47.984 [2024-07-15 14:05:42.608001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.984 qpair failed and we were unable to recover it. 00:26:47.984 [2024-07-15 14:05:42.617841] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.984 [2024-07-15 14:05:42.617945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.984 [2024-07-15 14:05:42.617976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.984 [2024-07-15 14:05:42.617993] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.984 [2024-07-15 14:05:42.618006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:47.984 [2024-07-15 14:05:42.618036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.984 qpair failed and we were unable to recover it. 00:26:47.984 [2024-07-15 14:05:42.627872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.984 [2024-07-15 14:05:42.627973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.984 [2024-07-15 14:05:42.627998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.984 [2024-07-15 14:05:42.628029] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.984 [2024-07-15 14:05:42.628042] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:47.984 [2024-07-15 14:05:42.628072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.984 qpair failed and we were unable to recover it. 
00:26:47.984 [2024-07-15 14:05:42.637851] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.984 [2024-07-15 14:05:42.637951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.984 [2024-07-15 14:05:42.637978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.984 [2024-07-15 14:05:42.637993] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.984 [2024-07-15 14:05:42.638006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:47.984 [2024-07-15 14:05:42.638051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.984 qpair failed and we were unable to recover it. 00:26:47.984 [2024-07-15 14:05:42.647911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.984 [2024-07-15 14:05:42.648010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.984 [2024-07-15 14:05:42.648051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.984 [2024-07-15 14:05:42.648066] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.984 [2024-07-15 14:05:42.648078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:47.984 [2024-07-15 14:05:42.648108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.984 qpair failed and we were unable to recover it. 00:26:47.984 [2024-07-15 14:05:42.657924] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.984 [2024-07-15 14:05:42.658037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.984 [2024-07-15 14:05:42.658063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.984 [2024-07-15 14:05:42.658078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.984 [2024-07-15 14:05:42.658090] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:47.984 [2024-07-15 14:05:42.658125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.984 qpair failed and we were unable to recover it. 
00:26:47.984 [2024-07-15 14:05:42.667964] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.984 [2024-07-15 14:05:42.668080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.984 [2024-07-15 14:05:42.668104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.984 [2024-07-15 14:05:42.668120] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.984 [2024-07-15 14:05:42.668132] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:47.984 [2024-07-15 14:05:42.668162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.984 qpair failed and we were unable to recover it. 00:26:47.984 [2024-07-15 14:05:42.677985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.984 [2024-07-15 14:05:42.678098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.984 [2024-07-15 14:05:42.678123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.984 [2024-07-15 14:05:42.678138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.984 [2024-07-15 14:05:42.678150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:47.984 [2024-07-15 14:05:42.678179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.984 qpair failed and we were unable to recover it. 00:26:47.984 [2024-07-15 14:05:42.688000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.984 [2024-07-15 14:05:42.688113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.984 [2024-07-15 14:05:42.688139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.984 [2024-07-15 14:05:42.688155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.984 [2024-07-15 14:05:42.688167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:47.984 [2024-07-15 14:05:42.688196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.984 qpair failed and we were unable to recover it. 
00:26:47.985 [2024-07-15 14:05:42.698119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.985 [2024-07-15 14:05:42.698216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.985 [2024-07-15 14:05:42.698240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.985 [2024-07-15 14:05:42.698255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.985 [2024-07-15 14:05:42.698267] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:47.985 [2024-07-15 14:05:42.698297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.985 qpair failed and we were unable to recover it. 00:26:47.985 [2024-07-15 14:05:42.708109] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.985 [2024-07-15 14:05:42.708252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.985 [2024-07-15 14:05:42.708278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.985 [2024-07-15 14:05:42.708293] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.985 [2024-07-15 14:05:42.708306] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:47.985 [2024-07-15 14:05:42.708335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.985 qpair failed and we were unable to recover it. 00:26:47.985 [2024-07-15 14:05:42.718153] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.985 [2024-07-15 14:05:42.718289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.985 [2024-07-15 14:05:42.718315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.985 [2024-07-15 14:05:42.718331] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.985 [2024-07-15 14:05:42.718343] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:47.985 [2024-07-15 14:05:42.718372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.985 qpair failed and we were unable to recover it. 
00:26:47.985 [2024-07-15 14:05:42.728109] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.985 [2024-07-15 14:05:42.728205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.985 [2024-07-15 14:05:42.728230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.985 [2024-07-15 14:05:42.728244] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.985 [2024-07-15 14:05:42.728257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:47.985 [2024-07-15 14:05:42.728286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.985 qpair failed and we were unable to recover it. 00:26:47.985 [2024-07-15 14:05:42.738144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.985 [2024-07-15 14:05:42.738240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.985 [2024-07-15 14:05:42.738267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.985 [2024-07-15 14:05:42.738282] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.985 [2024-07-15 14:05:42.738295] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:47.985 [2024-07-15 14:05:42.738323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.985 qpair failed and we were unable to recover it. 00:26:47.985 [2024-07-15 14:05:42.748267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.985 [2024-07-15 14:05:42.748366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.985 [2024-07-15 14:05:42.748390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.985 [2024-07-15 14:05:42.748404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.985 [2024-07-15 14:05:42.748423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:47.985 [2024-07-15 14:05:42.748452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.985 qpair failed and we were unable to recover it. 
00:26:47.985 [2024-07-15 14:05:42.758198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.985 [2024-07-15 14:05:42.758316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.985 [2024-07-15 14:05:42.758341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.985 [2024-07-15 14:05:42.758356] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.985 [2024-07-15 14:05:42.758369] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:47.985 [2024-07-15 14:05:42.758398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.985 qpair failed and we were unable to recover it. 00:26:47.985 [2024-07-15 14:05:42.768346] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.985 [2024-07-15 14:05:42.768439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.985 [2024-07-15 14:05:42.768464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.985 [2024-07-15 14:05:42.768478] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.985 [2024-07-15 14:05:42.768491] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:47.985 [2024-07-15 14:05:42.768519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.985 qpair failed and we were unable to recover it. 00:26:47.985 [2024-07-15 14:05:42.778254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.985 [2024-07-15 14:05:42.778353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.985 [2024-07-15 14:05:42.778378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.985 [2024-07-15 14:05:42.778392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.985 [2024-07-15 14:05:42.778405] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:47.985 [2024-07-15 14:05:42.778434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.985 qpair failed and we were unable to recover it. 
00:26:47.985 [2024-07-15 14:05:42.788384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.985 [2024-07-15 14:05:42.788512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.985 [2024-07-15 14:05:42.788538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.985 [2024-07-15 14:05:42.788553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.985 [2024-07-15 14:05:42.788565] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:47.985 [2024-07-15 14:05:42.788594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.985 qpair failed and we were unable to recover it. 00:26:47.985 [2024-07-15 14:05:42.798314] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.985 [2024-07-15 14:05:42.798452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.985 [2024-07-15 14:05:42.798478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.985 [2024-07-15 14:05:42.798493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.985 [2024-07-15 14:05:42.798505] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:47.985 [2024-07-15 14:05:42.798535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.985 qpair failed and we were unable to recover it. 00:26:47.985 [2024-07-15 14:05:42.808327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.985 [2024-07-15 14:05:42.808421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.985 [2024-07-15 14:05:42.808445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.985 [2024-07-15 14:05:42.808460] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.985 [2024-07-15 14:05:42.808473] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:47.985 [2024-07-15 14:05:42.808501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.985 qpair failed and we were unable to recover it. 
00:26:47.985 [2024-07-15 14:05:42.818332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.985 [2024-07-15 14:05:42.818424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.985 [2024-07-15 14:05:42.818448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.985 [2024-07-15 14:05:42.818462] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.985 [2024-07-15 14:05:42.818474] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:47.985 [2024-07-15 14:05:42.818504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.985 qpair failed and we were unable to recover it. 00:26:48.244 [2024-07-15 14:05:42.828401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.244 [2024-07-15 14:05:42.828512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.244 [2024-07-15 14:05:42.828537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.244 [2024-07-15 14:05:42.828552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.244 [2024-07-15 14:05:42.828565] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.244 [2024-07-15 14:05:42.828594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.244 qpair failed and we were unable to recover it. 00:26:48.244 [2024-07-15 14:05:42.838435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.244 [2024-07-15 14:05:42.838533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.244 [2024-07-15 14:05:42.838560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.244 [2024-07-15 14:05:42.838580] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.244 [2024-07-15 14:05:42.838594] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.244 [2024-07-15 14:05:42.838622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.244 qpair failed and we were unable to recover it. 
00:26:48.244 [2024-07-15 14:05:42.848418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.244 [2024-07-15 14:05:42.848521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.244 [2024-07-15 14:05:42.848545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.244 [2024-07-15 14:05:42.848560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.244 [2024-07-15 14:05:42.848573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.244 [2024-07-15 14:05:42.848601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.244 qpair failed and we were unable to recover it. 00:26:48.244 [2024-07-15 14:05:42.858547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.244 [2024-07-15 14:05:42.858676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.244 [2024-07-15 14:05:42.858703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.244 [2024-07-15 14:05:42.858732] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.244 [2024-07-15 14:05:42.858754] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.244 [2024-07-15 14:05:42.858786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.244 qpair failed and we were unable to recover it. 00:26:48.244 [2024-07-15 14:05:42.868529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.244 [2024-07-15 14:05:42.868630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.244 [2024-07-15 14:05:42.868654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.244 [2024-07-15 14:05:42.868669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.244 [2024-07-15 14:05:42.868682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.244 [2024-07-15 14:05:42.868711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.244 qpair failed and we were unable to recover it. 
00:26:48.244 [2024-07-15 14:05:42.878501] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.244 [2024-07-15 14:05:42.878599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.244 [2024-07-15 14:05:42.878623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.244 [2024-07-15 14:05:42.878637] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.244 [2024-07-15 14:05:42.878651] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.244 [2024-07-15 14:05:42.878679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.244 qpair failed and we were unable to recover it. 00:26:48.244 [2024-07-15 14:05:42.888534] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.244 [2024-07-15 14:05:42.888626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.244 [2024-07-15 14:05:42.888650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.244 [2024-07-15 14:05:42.888665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.244 [2024-07-15 14:05:42.888678] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.244 [2024-07-15 14:05:42.888707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.244 qpair failed and we were unable to recover it. 00:26:48.244 [2024-07-15 14:05:42.898665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.244 [2024-07-15 14:05:42.898800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.244 [2024-07-15 14:05:42.898828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.244 [2024-07-15 14:05:42.898843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.244 [2024-07-15 14:05:42.898856] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.244 [2024-07-15 14:05:42.898886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.244 qpair failed and we were unable to recover it. 
00:26:48.244 [2024-07-15 14:05:42.908635] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.244 [2024-07-15 14:05:42.908773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.244 [2024-07-15 14:05:42.908800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.244 [2024-07-15 14:05:42.908816] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.244 [2024-07-15 14:05:42.908829] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.244 [2024-07-15 14:05:42.908859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.244 qpair failed and we were unable to recover it. 00:26:48.244 [2024-07-15 14:05:42.918624] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.244 [2024-07-15 14:05:42.918755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.244 [2024-07-15 14:05:42.918782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.244 [2024-07-15 14:05:42.918798] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.244 [2024-07-15 14:05:42.918811] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.244 [2024-07-15 14:05:42.918841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.244 qpair failed and we were unable to recover it. 00:26:48.244 [2024-07-15 14:05:42.928652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.244 [2024-07-15 14:05:42.928786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.244 [2024-07-15 14:05:42.928818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.244 [2024-07-15 14:05:42.928835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.244 [2024-07-15 14:05:42.928847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.244 [2024-07-15 14:05:42.928878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.244 qpair failed and we were unable to recover it. 
00:26:48.244 [2024-07-15 14:05:42.938668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.245 [2024-07-15 14:05:42.938782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.245 [2024-07-15 14:05:42.938806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.245 [2024-07-15 14:05:42.938822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.245 [2024-07-15 14:05:42.938834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.245 [2024-07-15 14:05:42.938864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.245 qpair failed and we were unable to recover it. 00:26:48.245 [2024-07-15 14:05:42.948807] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.245 [2024-07-15 14:05:42.948907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.245 [2024-07-15 14:05:42.948932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.245 [2024-07-15 14:05:42.948948] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.245 [2024-07-15 14:05:42.948960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.245 [2024-07-15 14:05:42.948990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.245 qpair failed and we were unable to recover it. 00:26:48.245 [2024-07-15 14:05:42.958792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.245 [2024-07-15 14:05:42.958934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.245 [2024-07-15 14:05:42.958961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.245 [2024-07-15 14:05:42.958976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.245 [2024-07-15 14:05:42.958989] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.245 [2024-07-15 14:05:42.959034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.245 qpair failed and we were unable to recover it. 
00:26:48.245 [2024-07-15 14:05:42.968794] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.245 [2024-07-15 14:05:42.968916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.245 [2024-07-15 14:05:42.968942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.245 [2024-07-15 14:05:42.968958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.245 [2024-07-15 14:05:42.968971] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.245 [2024-07-15 14:05:42.969007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.245 qpair failed and we were unable to recover it. 00:26:48.245 [2024-07-15 14:05:42.978859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.245 [2024-07-15 14:05:42.978964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.245 [2024-07-15 14:05:42.978990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.245 [2024-07-15 14:05:42.979006] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.245 [2024-07-15 14:05:42.979035] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.245 [2024-07-15 14:05:42.979065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.245 qpair failed and we were unable to recover it. 00:26:48.245 [2024-07-15 14:05:42.988871] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.245 [2024-07-15 14:05:42.989000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.245 [2024-07-15 14:05:42.989041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.245 [2024-07-15 14:05:42.989056] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.245 [2024-07-15 14:05:42.989069] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.245 [2024-07-15 14:05:42.989098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.245 qpair failed and we were unable to recover it. 
00:26:48.245 [2024-07-15 14:05:42.998872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.245 [2024-07-15 14:05:42.998970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.245 [2024-07-15 14:05:42.998995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.245 [2024-07-15 14:05:42.999010] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.245 [2024-07-15 14:05:42.999038] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.245 [2024-07-15 14:05:42.999067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.245 qpair failed and we were unable to recover it. 00:26:48.245 [2024-07-15 14:05:43.008936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.245 [2024-07-15 14:05:43.009046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.245 [2024-07-15 14:05:43.009071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.245 [2024-07-15 14:05:43.009085] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.245 [2024-07-15 14:05:43.009098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.245 [2024-07-15 14:05:43.009127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.245 qpair failed and we were unable to recover it. 00:26:48.245 [2024-07-15 14:05:43.018902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.245 [2024-07-15 14:05:43.018997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.245 [2024-07-15 14:05:43.019043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.245 [2024-07-15 14:05:43.019058] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.245 [2024-07-15 14:05:43.019071] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.245 [2024-07-15 14:05:43.019102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.245 qpair failed and we were unable to recover it. 
00:26:48.245 [2024-07-15 14:05:43.028973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.245 [2024-07-15 14:05:43.029090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.245 [2024-07-15 14:05:43.029115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.245 [2024-07-15 14:05:43.029130] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.245 [2024-07-15 14:05:43.029143] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.245 [2024-07-15 14:05:43.029172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.245 qpair failed and we were unable to recover it. 00:26:48.245 [2024-07-15 14:05:43.038988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.245 [2024-07-15 14:05:43.039105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.245 [2024-07-15 14:05:43.039131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.245 [2024-07-15 14:05:43.039146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.245 [2024-07-15 14:05:43.039159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.245 [2024-07-15 14:05:43.039188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.245 qpair failed and we were unable to recover it. 00:26:48.245 [2024-07-15 14:05:43.049048] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.245 [2024-07-15 14:05:43.049158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.245 [2024-07-15 14:05:43.049184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.245 [2024-07-15 14:05:43.049199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.245 [2024-07-15 14:05:43.049212] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.245 [2024-07-15 14:05:43.049243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.245 qpair failed and we were unable to recover it. 
00:26:48.245 [2024-07-15 14:05:43.059047] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.245 [2024-07-15 14:05:43.059160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.245 [2024-07-15 14:05:43.059186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.245 [2024-07-15 14:05:43.059201] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.245 [2024-07-15 14:05:43.059214] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.245 [2024-07-15 14:05:43.059248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.245 qpair failed and we were unable to recover it. 00:26:48.245 [2024-07-15 14:05:43.069115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.245 [2024-07-15 14:05:43.069251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.245 [2024-07-15 14:05:43.069277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.245 [2024-07-15 14:05:43.069292] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.245 [2024-07-15 14:05:43.069305] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.245 [2024-07-15 14:05:43.069333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.245 qpair failed and we were unable to recover it. 00:26:48.245 [2024-07-15 14:05:43.079159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.245 [2024-07-15 14:05:43.079258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.246 [2024-07-15 14:05:43.079284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.246 [2024-07-15 14:05:43.079300] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.246 [2024-07-15 14:05:43.079312] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.246 [2024-07-15 14:05:43.079342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.246 qpair failed and we were unable to recover it. 
00:26:48.516 [2024-07-15 14:05:43.089125] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.516 [2024-07-15 14:05:43.089251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.516 [2024-07-15 14:05:43.089276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.517 [2024-07-15 14:05:43.089290] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.517 [2024-07-15 14:05:43.089303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.517 [2024-07-15 14:05:43.089332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.517 qpair failed and we were unable to recover it. 00:26:48.517 [2024-07-15 14:05:43.099143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.517 [2024-07-15 14:05:43.099238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.517 [2024-07-15 14:05:43.099262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.517 [2024-07-15 14:05:43.099276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.517 [2024-07-15 14:05:43.099289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.517 [2024-07-15 14:05:43.099317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.517 qpair failed and we were unable to recover it. 00:26:48.517 [2024-07-15 14:05:43.109206] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.517 [2024-07-15 14:05:43.109301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.517 [2024-07-15 14:05:43.109330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.517 [2024-07-15 14:05:43.109345] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.517 [2024-07-15 14:05:43.109358] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.517 [2024-07-15 14:05:43.109387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.517 qpair failed and we were unable to recover it. 
00:26:48.517 [2024-07-15 14:05:43.119288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.517 [2024-07-15 14:05:43.119385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.517 [2024-07-15 14:05:43.119409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.517 [2024-07-15 14:05:43.119424] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.518 [2024-07-15 14:05:43.119436] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.518 [2024-07-15 14:05:43.119464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.518 qpair failed and we were unable to recover it. 00:26:48.518 [2024-07-15 14:05:43.129223] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.518 [2024-07-15 14:05:43.129330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.518 [2024-07-15 14:05:43.129356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.518 [2024-07-15 14:05:43.129371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.518 [2024-07-15 14:05:43.129383] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.518 [2024-07-15 14:05:43.129412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.518 qpair failed and we were unable to recover it. 00:26:48.518 [2024-07-15 14:05:43.139261] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.518 [2024-07-15 14:05:43.139354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.518 [2024-07-15 14:05:43.139378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.518 [2024-07-15 14:05:43.139392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.518 [2024-07-15 14:05:43.139404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.518 [2024-07-15 14:05:43.139432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.518 qpair failed and we were unable to recover it. 
00:26:48.518 [2024-07-15 14:05:43.149320] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.518 [2024-07-15 14:05:43.149430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.518 [2024-07-15 14:05:43.149456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.518 [2024-07-15 14:05:43.149471] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.518 [2024-07-15 14:05:43.149489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.518 [2024-07-15 14:05:43.149518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.518 qpair failed and we were unable to recover it. 00:26:48.518 [2024-07-15 14:05:43.159332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.518 [2024-07-15 14:05:43.159435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.518 [2024-07-15 14:05:43.159461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.518 [2024-07-15 14:05:43.159476] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.518 [2024-07-15 14:05:43.159488] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.519 [2024-07-15 14:05:43.159517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.519 qpair failed and we were unable to recover it. 00:26:48.519 [2024-07-15 14:05:43.169337] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.519 [2024-07-15 14:05:43.169464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.519 [2024-07-15 14:05:43.169489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.519 [2024-07-15 14:05:43.169504] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.519 [2024-07-15 14:05:43.169517] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.519 [2024-07-15 14:05:43.169546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.519 qpair failed and we were unable to recover it. 
00:26:48.519 [2024-07-15 14:05:43.179348] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.519 [2024-07-15 14:05:43.179439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.519 [2024-07-15 14:05:43.179463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.519 [2024-07-15 14:05:43.179478] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.519 [2024-07-15 14:05:43.179490] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.519 [2024-07-15 14:05:43.179519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.519 qpair failed and we were unable to recover it. 00:26:48.519 [2024-07-15 14:05:43.189429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.519 [2024-07-15 14:05:43.189547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.519 [2024-07-15 14:05:43.189573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.519 [2024-07-15 14:05:43.189588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.519 [2024-07-15 14:05:43.189601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.519 [2024-07-15 14:05:43.189630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.519 qpair failed and we were unable to recover it. 00:26:48.519 [2024-07-15 14:05:43.199412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.520 [2024-07-15 14:05:43.199517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.520 [2024-07-15 14:05:43.199541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.520 [2024-07-15 14:05:43.199556] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.520 [2024-07-15 14:05:43.199568] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.520 [2024-07-15 14:05:43.199597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.520 qpair failed and we were unable to recover it. 
00:26:48.520 [2024-07-15 14:05:43.209435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.520 [2024-07-15 14:05:43.209530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.520 [2024-07-15 14:05:43.209554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.520 [2024-07-15 14:05:43.209568] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.520 [2024-07-15 14:05:43.209581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.520 [2024-07-15 14:05:43.209610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.520 qpair failed and we were unable to recover it. 00:26:48.520 [2024-07-15 14:05:43.219459] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.520 [2024-07-15 14:05:43.219551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.520 [2024-07-15 14:05:43.219576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.520 [2024-07-15 14:05:43.219591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.520 [2024-07-15 14:05:43.219603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.520 [2024-07-15 14:05:43.219632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.520 qpair failed and we were unable to recover it. 00:26:48.520 [2024-07-15 14:05:43.229507] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.520 [2024-07-15 14:05:43.229610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.520 [2024-07-15 14:05:43.229636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.520 [2024-07-15 14:05:43.229651] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.520 [2024-07-15 14:05:43.229664] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.520 [2024-07-15 14:05:43.229693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.520 qpair failed and we were unable to recover it. 
00:26:48.521 [2024-07-15 14:05:43.239609] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.521 [2024-07-15 14:05:43.239701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.521 [2024-07-15 14:05:43.239750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.521 [2024-07-15 14:05:43.239773] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.521 [2024-07-15 14:05:43.239787] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.521 [2024-07-15 14:05:43.239817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.521 qpair failed and we were unable to recover it. 00:26:48.521 [2024-07-15 14:05:43.249593] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.521 [2024-07-15 14:05:43.249710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.521 [2024-07-15 14:05:43.249758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.521 [2024-07-15 14:05:43.249775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.521 [2024-07-15 14:05:43.249788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.521 [2024-07-15 14:05:43.249818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.521 qpair failed and we were unable to recover it. 00:26:48.521 [2024-07-15 14:05:43.259589] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.521 [2024-07-15 14:05:43.259680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.521 [2024-07-15 14:05:43.259704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.521 [2024-07-15 14:05:43.259733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.521 [2024-07-15 14:05:43.259756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.522 [2024-07-15 14:05:43.259788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.522 qpair failed and we were unable to recover it. 
00:26:48.522 [2024-07-15 14:05:43.269598] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.522 [2024-07-15 14:05:43.269700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.522 [2024-07-15 14:05:43.269749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.522 [2024-07-15 14:05:43.269767] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.522 [2024-07-15 14:05:43.269780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.522 [2024-07-15 14:05:43.269810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.522 qpair failed and we were unable to recover it. 00:26:48.522 [2024-07-15 14:05:43.279609] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.522 [2024-07-15 14:05:43.279704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.522 [2024-07-15 14:05:43.279755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.522 [2024-07-15 14:05:43.279773] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.522 [2024-07-15 14:05:43.279785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.522 [2024-07-15 14:05:43.279816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.522 qpair failed and we were unable to recover it. 00:26:48.522 [2024-07-15 14:05:43.289683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.522 [2024-07-15 14:05:43.289836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.522 [2024-07-15 14:05:43.289863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.522 [2024-07-15 14:05:43.289878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.522 [2024-07-15 14:05:43.289891] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.522 [2024-07-15 14:05:43.289921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.522 qpair failed and we were unable to recover it. 
00:26:48.522 [2024-07-15 14:05:43.299683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.523 [2024-07-15 14:05:43.299800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.523 [2024-07-15 14:05:43.299824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.523 [2024-07-15 14:05:43.299839] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.523 [2024-07-15 14:05:43.299853] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.523 [2024-07-15 14:05:43.299883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.523 qpair failed and we were unable to recover it. 00:26:48.523 [2024-07-15 14:05:43.309715] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.523 [2024-07-15 14:05:43.309883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.523 [2024-07-15 14:05:43.309910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.523 [2024-07-15 14:05:43.309925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.523 [2024-07-15 14:05:43.309938] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.523 [2024-07-15 14:05:43.309968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.523 qpair failed and we were unable to recover it. 00:26:48.523 [2024-07-15 14:05:43.319730] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.523 [2024-07-15 14:05:43.319836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.523 [2024-07-15 14:05:43.319861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.523 [2024-07-15 14:05:43.319876] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.523 [2024-07-15 14:05:43.319889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.523 [2024-07-15 14:05:43.319919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.523 qpair failed and we were unable to recover it. 
00:26:48.523 [2024-07-15 14:05:43.329803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.523 [2024-07-15 14:05:43.329909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.523 [2024-07-15 14:05:43.329934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.523 [2024-07-15 14:05:43.329954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.523 [2024-07-15 14:05:43.329967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.523 [2024-07-15 14:05:43.329997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.523 qpair failed and we were unable to recover it. 00:26:48.523 [2024-07-15 14:05:43.339794] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.523 [2024-07-15 14:05:43.339923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.523 [2024-07-15 14:05:43.339950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.523 [2024-07-15 14:05:43.339965] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.523 [2024-07-15 14:05:43.339977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.523 [2024-07-15 14:05:43.340007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.523 qpair failed and we were unable to recover it. 00:26:48.523 [2024-07-15 14:05:43.349839] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.524 [2024-07-15 14:05:43.349938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.524 [2024-07-15 14:05:43.349967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.524 [2024-07-15 14:05:43.349982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.524 [2024-07-15 14:05:43.349994] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.524 [2024-07-15 14:05:43.350024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.524 qpair failed and we were unable to recover it. 
00:26:48.785 [2024-07-15 14:05:43.359875] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.785 [2024-07-15 14:05:43.360005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.785 [2024-07-15 14:05:43.360031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.785 [2024-07-15 14:05:43.360047] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.785 [2024-07-15 14:05:43.360059] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.785 [2024-07-15 14:05:43.360088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.785 qpair failed and we were unable to recover it. 00:26:48.785 [2024-07-15 14:05:43.369968] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.785 [2024-07-15 14:05:43.370065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.785 [2024-07-15 14:05:43.370091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.785 [2024-07-15 14:05:43.370106] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.785 [2024-07-15 14:05:43.370118] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.785 [2024-07-15 14:05:43.370148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.785 qpair failed and we were unable to recover it. 00:26:48.785 [2024-07-15 14:05:43.379920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.785 [2024-07-15 14:05:43.380026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.785 [2024-07-15 14:05:43.380051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.785 [2024-07-15 14:05:43.380066] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.785 [2024-07-15 14:05:43.380079] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.785 [2024-07-15 14:05:43.380108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.785 qpair failed and we were unable to recover it. 
00:26:48.785 [2024-07-15 14:05:43.389947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.785 [2024-07-15 14:05:43.390052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.785 [2024-07-15 14:05:43.390078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.785 [2024-07-15 14:05:43.390093] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.785 [2024-07-15 14:05:43.390105] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.785 [2024-07-15 14:05:43.390134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.785 qpair failed and we were unable to recover it. 00:26:48.785 [2024-07-15 14:05:43.400009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.785 [2024-07-15 14:05:43.400138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.785 [2024-07-15 14:05:43.400163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.785 [2024-07-15 14:05:43.400177] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.785 [2024-07-15 14:05:43.400189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.785 [2024-07-15 14:05:43.400218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.785 qpair failed and we were unable to recover it. 00:26:48.785 [2024-07-15 14:05:43.409987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.785 [2024-07-15 14:05:43.410081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.785 [2024-07-15 14:05:43.410107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.785 [2024-07-15 14:05:43.410122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.785 [2024-07-15 14:05:43.410135] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.785 [2024-07-15 14:05:43.410164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.785 qpair failed and we were unable to recover it. 
00:26:48.785 [2024-07-15 14:05:43.420119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.785 [2024-07-15 14:05:43.420228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.785 [2024-07-15 14:05:43.420259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.785 [2024-07-15 14:05:43.420274] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.785 [2024-07-15 14:05:43.420287] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.785 [2024-07-15 14:05:43.420317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.785 qpair failed and we were unable to recover it. 00:26:48.785 [2024-07-15 14:05:43.430059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.785 [2024-07-15 14:05:43.430174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.785 [2024-07-15 14:05:43.430200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.785 [2024-07-15 14:05:43.430215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.785 [2024-07-15 14:05:43.430228] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.785 [2024-07-15 14:05:43.430256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.785 qpair failed and we were unable to recover it. 00:26:48.785 [2024-07-15 14:05:43.440091] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.785 [2024-07-15 14:05:43.440208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.785 [2024-07-15 14:05:43.440234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.786 [2024-07-15 14:05:43.440249] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.786 [2024-07-15 14:05:43.440262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.786 [2024-07-15 14:05:43.440292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.786 qpair failed and we were unable to recover it. 
00:26:48.786 [2024-07-15 14:05:43.450192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.786 [2024-07-15 14:05:43.450293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.786 [2024-07-15 14:05:43.450319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.786 [2024-07-15 14:05:43.450334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.786 [2024-07-15 14:05:43.450351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.786 [2024-07-15 14:05:43.450380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.786 qpair failed and we were unable to recover it. 00:26:48.786 [2024-07-15 14:05:43.460137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.786 [2024-07-15 14:05:43.460263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.786 [2024-07-15 14:05:43.460289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.786 [2024-07-15 14:05:43.460304] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.786 [2024-07-15 14:05:43.460316] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.786 [2024-07-15 14:05:43.460351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.786 qpair failed and we were unable to recover it. 00:26:48.786 [2024-07-15 14:05:43.470185] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.786 [2024-07-15 14:05:43.470299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.786 [2024-07-15 14:05:43.470325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.786 [2024-07-15 14:05:43.470340] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.786 [2024-07-15 14:05:43.470353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.786 [2024-07-15 14:05:43.470382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.786 qpair failed and we were unable to recover it. 
00:26:48.786 [2024-07-15 14:05:43.480207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.786 [2024-07-15 14:05:43.480319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.786 [2024-07-15 14:05:43.480344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.786 [2024-07-15 14:05:43.480359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.786 [2024-07-15 14:05:43.480372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.786 [2024-07-15 14:05:43.480402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.786 qpair failed and we were unable to recover it. 00:26:48.786 [2024-07-15 14:05:43.490232] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.786 [2024-07-15 14:05:43.490353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.786 [2024-07-15 14:05:43.490380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.786 [2024-07-15 14:05:43.490396] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.786 [2024-07-15 14:05:43.490409] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.786 [2024-07-15 14:05:43.490438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.786 qpair failed and we were unable to recover it. 00:26:48.786 [2024-07-15 14:05:43.500239] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.786 [2024-07-15 14:05:43.500348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.786 [2024-07-15 14:05:43.500375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.786 [2024-07-15 14:05:43.500391] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.786 [2024-07-15 14:05:43.500403] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.786 [2024-07-15 14:05:43.500432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.786 qpair failed and we were unable to recover it. 
00:26:48.786 [2024-07-15 14:05:43.510288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.786 [2024-07-15 14:05:43.510420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.786 [2024-07-15 14:05:43.510451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.786 [2024-07-15 14:05:43.510467] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.786 [2024-07-15 14:05:43.510479] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.786 [2024-07-15 14:05:43.510509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.786 qpair failed and we were unable to recover it. 00:26:48.786 [2024-07-15 14:05:43.520392] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.786 [2024-07-15 14:05:43.520527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.786 [2024-07-15 14:05:43.520553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.786 [2024-07-15 14:05:43.520567] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.786 [2024-07-15 14:05:43.520579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.786 [2024-07-15 14:05:43.520608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.786 qpair failed and we were unable to recover it. 00:26:48.786 [2024-07-15 14:05:43.530324] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.786 [2024-07-15 14:05:43.530417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.786 [2024-07-15 14:05:43.530443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.786 [2024-07-15 14:05:43.530457] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.786 [2024-07-15 14:05:43.530470] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.786 [2024-07-15 14:05:43.530499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.786 qpair failed and we were unable to recover it. 
00:26:48.786 [2024-07-15 14:05:43.540396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.786 [2024-07-15 14:05:43.540541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.786 [2024-07-15 14:05:43.540567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.786 [2024-07-15 14:05:43.540582] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.786 [2024-07-15 14:05:43.540594] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.786 [2024-07-15 14:05:43.540623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.786 qpair failed and we were unable to recover it. 00:26:48.786 [2024-07-15 14:05:43.550426] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.786 [2024-07-15 14:05:43.550544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.786 [2024-07-15 14:05:43.550568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.786 [2024-07-15 14:05:43.550583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.786 [2024-07-15 14:05:43.550600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.786 [2024-07-15 14:05:43.550630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.787 qpair failed and we were unable to recover it. 00:26:48.787 [2024-07-15 14:05:43.560413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.787 [2024-07-15 14:05:43.560533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.787 [2024-07-15 14:05:43.560560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.787 [2024-07-15 14:05:43.560575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.787 [2024-07-15 14:05:43.560587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.787 [2024-07-15 14:05:43.560616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.787 qpair failed and we were unable to recover it. 
00:26:48.787 [2024-07-15 14:05:43.570480] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.787 [2024-07-15 14:05:43.570594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.787 [2024-07-15 14:05:43.570620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.787 [2024-07-15 14:05:43.570635] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.787 [2024-07-15 14:05:43.570648] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.787 [2024-07-15 14:05:43.570677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.787 qpair failed and we were unable to recover it. 00:26:48.787 [2024-07-15 14:05:43.580505] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.787 [2024-07-15 14:05:43.580613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.787 [2024-07-15 14:05:43.580639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.787 [2024-07-15 14:05:43.580654] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.787 [2024-07-15 14:05:43.580666] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.787 [2024-07-15 14:05:43.580695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.787 qpair failed and we were unable to recover it. 00:26:48.787 [2024-07-15 14:05:43.590540] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.787 [2024-07-15 14:05:43.590696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.787 [2024-07-15 14:05:43.590721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.787 [2024-07-15 14:05:43.590743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.787 [2024-07-15 14:05:43.590758] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.787 [2024-07-15 14:05:43.590788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.787 qpair failed and we were unable to recover it. 
00:26:48.787 [2024-07-15 14:05:43.600571] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.787 [2024-07-15 14:05:43.600691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.787 [2024-07-15 14:05:43.600717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.787 [2024-07-15 14:05:43.600732] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.787 [2024-07-15 14:05:43.600753] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.787 [2024-07-15 14:05:43.600783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.787 qpair failed and we were unable to recover it. 00:26:48.787 [2024-07-15 14:05:43.610616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.787 [2024-07-15 14:05:43.610758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.787 [2024-07-15 14:05:43.610785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.787 [2024-07-15 14:05:43.610799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.787 [2024-07-15 14:05:43.610812] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.787 [2024-07-15 14:05:43.610841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.787 qpair failed and we were unable to recover it. 00:26:48.787 [2024-07-15 14:05:43.620613] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.787 [2024-07-15 14:05:43.620724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.787 [2024-07-15 14:05:43.620757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.787 [2024-07-15 14:05:43.620773] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.787 [2024-07-15 14:05:43.620785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:48.787 [2024-07-15 14:05:43.620815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.787 qpair failed and we were unable to recover it. 
00:26:49.048 [2024-07-15 14:05:43.630654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.048 [2024-07-15 14:05:43.630777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.048 [2024-07-15 14:05:43.630804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.048 [2024-07-15 14:05:43.630819] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.048 [2024-07-15 14:05:43.630831] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.048 [2024-07-15 14:05:43.630860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.048 qpair failed and we were unable to recover it. 00:26:49.048 [2024-07-15 14:05:43.640678] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.048 [2024-07-15 14:05:43.640801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.048 [2024-07-15 14:05:43.640828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.048 [2024-07-15 14:05:43.640852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.048 [2024-07-15 14:05:43.640866] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.048 [2024-07-15 14:05:43.640896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.048 qpair failed and we were unable to recover it. 00:26:49.048 [2024-07-15 14:05:43.650713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.048 [2024-07-15 14:05:43.650836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.048 [2024-07-15 14:05:43.650862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.048 [2024-07-15 14:05:43.650877] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.048 [2024-07-15 14:05:43.650889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.048 [2024-07-15 14:05:43.650918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.048 qpair failed and we were unable to recover it. 
00:26:49.048 [2024-07-15 14:05:43.660793] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.048 [2024-07-15 14:05:43.660886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.048 [2024-07-15 14:05:43.660912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.048 [2024-07-15 14:05:43.660927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.048 [2024-07-15 14:05:43.660940] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.048 [2024-07-15 14:05:43.660969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.048 qpair failed and we were unable to recover it. 00:26:49.048 [2024-07-15 14:05:43.670841] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.048 [2024-07-15 14:05:43.670942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.048 [2024-07-15 14:05:43.670968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.048 [2024-07-15 14:05:43.670983] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.048 [2024-07-15 14:05:43.670995] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.048 [2024-07-15 14:05:43.671024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.048 qpair failed and we were unable to recover it. 00:26:49.048 [2024-07-15 14:05:43.680849] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.048 [2024-07-15 14:05:43.680954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.048 [2024-07-15 14:05:43.680981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.048 [2024-07-15 14:05:43.680996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.048 [2024-07-15 14:05:43.681008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.048 [2024-07-15 14:05:43.681038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.048 qpair failed and we were unable to recover it. 
00:26:49.048 [2024-07-15 14:05:43.690858] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.048 [2024-07-15 14:05:43.690983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.048 [2024-07-15 14:05:43.691009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.048 [2024-07-15 14:05:43.691024] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.048 [2024-07-15 14:05:43.691036] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.048 [2024-07-15 14:05:43.691065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.048 qpair failed and we were unable to recover it. 00:26:49.048 [2024-07-15 14:05:43.700835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.048 [2024-07-15 14:05:43.700932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.048 [2024-07-15 14:05:43.700958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.048 [2024-07-15 14:05:43.700972] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.048 [2024-07-15 14:05:43.700985] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.048 [2024-07-15 14:05:43.701014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.048 qpair failed and we were unable to recover it. 00:26:49.048 [2024-07-15 14:05:43.710898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.048 [2024-07-15 14:05:43.711031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.048 [2024-07-15 14:05:43.711057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.048 [2024-07-15 14:05:43.711071] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.048 [2024-07-15 14:05:43.711083] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.048 [2024-07-15 14:05:43.711113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.048 qpair failed and we were unable to recover it. 
00:26:49.048 [2024-07-15 14:05:43.720914] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.048 [2024-07-15 14:05:43.721020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.048 [2024-07-15 14:05:43.721046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.048 [2024-07-15 14:05:43.721061] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.048 [2024-07-15 14:05:43.721073] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.048 [2024-07-15 14:05:43.721103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.048 qpair failed and we were unable to recover it. 00:26:49.048 [2024-07-15 14:05:43.730994] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.048 [2024-07-15 14:05:43.731119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.048 [2024-07-15 14:05:43.731143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.048 [2024-07-15 14:05:43.731163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.048 [2024-07-15 14:05:43.731176] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.048 [2024-07-15 14:05:43.731206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.048 qpair failed and we were unable to recover it. 00:26:49.048 [2024-07-15 14:05:43.740929] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.048 [2024-07-15 14:05:43.741064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.048 [2024-07-15 14:05:43.741090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.048 [2024-07-15 14:05:43.741104] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.048 [2024-07-15 14:05:43.741117] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.048 [2024-07-15 14:05:43.741145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.048 qpair failed and we were unable to recover it. 
00:26:49.048 [2024-07-15 14:05:43.751080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.048 [2024-07-15 14:05:43.751213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.048 [2024-07-15 14:05:43.751239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.048 [2024-07-15 14:05:43.751255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.048 [2024-07-15 14:05:43.751267] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.049 [2024-07-15 14:05:43.751296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.049 qpair failed and we were unable to recover it. 00:26:49.049 [2024-07-15 14:05:43.761079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.049 [2024-07-15 14:05:43.761242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.049 [2024-07-15 14:05:43.761268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.049 [2024-07-15 14:05:43.761283] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.049 [2024-07-15 14:05:43.761295] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.049 [2024-07-15 14:05:43.761323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.049 qpair failed and we were unable to recover it. 00:26:49.049 [2024-07-15 14:05:43.771031] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.049 [2024-07-15 14:05:43.771159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.049 [2024-07-15 14:05:43.771184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.049 [2024-07-15 14:05:43.771199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.049 [2024-07-15 14:05:43.771211] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.049 [2024-07-15 14:05:43.771241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.049 qpair failed and we were unable to recover it. 
00:26:49.049 [2024-07-15 14:05:43.781099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.049 [2024-07-15 14:05:43.781219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.049 [2024-07-15 14:05:43.781245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.049 [2024-07-15 14:05:43.781260] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.049 [2024-07-15 14:05:43.781272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.049 [2024-07-15 14:05:43.781302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.049 qpair failed and we were unable to recover it. 00:26:49.049 [2024-07-15 14:05:43.791134] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.049 [2024-07-15 14:05:43.791255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.049 [2024-07-15 14:05:43.791282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.049 [2024-07-15 14:05:43.791297] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.049 [2024-07-15 14:05:43.791309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.049 [2024-07-15 14:05:43.791338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.049 qpair failed and we were unable to recover it. 00:26:49.049 [2024-07-15 14:05:43.801136] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.049 [2024-07-15 14:05:43.801252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.049 [2024-07-15 14:05:43.801277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.049 [2024-07-15 14:05:43.801291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.049 [2024-07-15 14:05:43.801304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.049 [2024-07-15 14:05:43.801333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.049 qpair failed and we were unable to recover it. 
00:26:49.049 [2024-07-15 14:05:43.811174] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.049 [2024-07-15 14:05:43.811288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.049 [2024-07-15 14:05:43.811313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.049 [2024-07-15 14:05:43.811328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.049 [2024-07-15 14:05:43.811340] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.049 [2024-07-15 14:05:43.811370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.049 qpair failed and we were unable to recover it. 00:26:49.049 [2024-07-15 14:05:43.821199] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.049 [2024-07-15 14:05:43.821313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.049 [2024-07-15 14:05:43.821342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.049 [2024-07-15 14:05:43.821358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.049 [2024-07-15 14:05:43.821370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.049 [2024-07-15 14:05:43.821400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.049 qpair failed and we were unable to recover it. 00:26:49.049 [2024-07-15 14:05:43.831245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.049 [2024-07-15 14:05:43.831363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.049 [2024-07-15 14:05:43.831389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.049 [2024-07-15 14:05:43.831404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.049 [2024-07-15 14:05:43.831416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.049 [2024-07-15 14:05:43.831445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.049 qpair failed and we were unable to recover it. 
00:26:49.049 [2024-07-15 14:05:43.841267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.049 [2024-07-15 14:05:43.841383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.049 [2024-07-15 14:05:43.841408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.049 [2024-07-15 14:05:43.841423] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.049 [2024-07-15 14:05:43.841435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.049 [2024-07-15 14:05:43.841464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.049 qpair failed and we were unable to recover it. 00:26:49.049 [2024-07-15 14:05:43.851291] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.049 [2024-07-15 14:05:43.851407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.049 [2024-07-15 14:05:43.851433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.049 [2024-07-15 14:05:43.851447] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.049 [2024-07-15 14:05:43.851459] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.049 [2024-07-15 14:05:43.851488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.049 qpair failed and we were unable to recover it. 00:26:49.049 [2024-07-15 14:05:43.861287] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.049 [2024-07-15 14:05:43.861410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.049 [2024-07-15 14:05:43.861436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.049 [2024-07-15 14:05:43.861451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.049 [2024-07-15 14:05:43.861463] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.049 [2024-07-15 14:05:43.861498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.049 qpair failed and we were unable to recover it. 
00:26:49.049 [2024-07-15 14:05:43.871391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.049 [2024-07-15 14:05:43.871517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.049 [2024-07-15 14:05:43.871543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.049 [2024-07-15 14:05:43.871558] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.049 [2024-07-15 14:05:43.871571] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.049 [2024-07-15 14:05:43.871599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.049 qpair failed and we were unable to recover it. 00:26:49.049 [2024-07-15 14:05:43.881391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.049 [2024-07-15 14:05:43.881491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.049 [2024-07-15 14:05:43.881517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.049 [2024-07-15 14:05:43.881532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.049 [2024-07-15 14:05:43.881544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.049 [2024-07-15 14:05:43.881573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.049 qpair failed and we were unable to recover it. 00:26:49.310 [2024-07-15 14:05:43.891386] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.310 [2024-07-15 14:05:43.891540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.310 [2024-07-15 14:05:43.891566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.310 [2024-07-15 14:05:43.891581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.310 [2024-07-15 14:05:43.891593] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.310 [2024-07-15 14:05:43.891624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.310 qpair failed and we were unable to recover it. 
00:26:49.310 [2024-07-15 14:05:43.901378] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.310 [2024-07-15 14:05:43.901488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.310 [2024-07-15 14:05:43.901515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.310 [2024-07-15 14:05:43.901529] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.310 [2024-07-15 14:05:43.901542] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.310 [2024-07-15 14:05:43.901571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.310 qpair failed and we were unable to recover it. 00:26:49.311 [2024-07-15 14:05:43.911454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.311 [2024-07-15 14:05:43.911572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.311 [2024-07-15 14:05:43.911603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.311 [2024-07-15 14:05:43.911619] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.311 [2024-07-15 14:05:43.911631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.311 [2024-07-15 14:05:43.911661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.311 qpair failed and we were unable to recover it. 00:26:49.311 [2024-07-15 14:05:43.921515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.311 [2024-07-15 14:05:43.921629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.311 [2024-07-15 14:05:43.921655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.311 [2024-07-15 14:05:43.921670] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.311 [2024-07-15 14:05:43.921683] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.311 [2024-07-15 14:05:43.921712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.311 qpair failed and we were unable to recover it. 
00:26:49.311 [2024-07-15 14:05:43.931504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.311 [2024-07-15 14:05:43.931611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.311 [2024-07-15 14:05:43.931638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.311 [2024-07-15 14:05:43.931652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.311 [2024-07-15 14:05:43.931665] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.311 [2024-07-15 14:05:43.931694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.311 qpair failed and we were unable to recover it. 00:26:49.311 [2024-07-15 14:05:43.941514] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.311 [2024-07-15 14:05:43.941644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.311 [2024-07-15 14:05:43.941671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.311 [2024-07-15 14:05:43.941686] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.311 [2024-07-15 14:05:43.941698] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.311 [2024-07-15 14:05:43.941727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.311 qpair failed and we were unable to recover it. 00:26:49.311 [2024-07-15 14:05:43.951554] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.311 [2024-07-15 14:05:43.951671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.311 [2024-07-15 14:05:43.951697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.311 [2024-07-15 14:05:43.951712] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.311 [2024-07-15 14:05:43.951730] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.311 [2024-07-15 14:05:43.951768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.311 qpair failed and we were unable to recover it. 
00:26:49.311 [2024-07-15 14:05:43.961576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.311 [2024-07-15 14:05:43.961690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.311 [2024-07-15 14:05:43.961716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.311 [2024-07-15 14:05:43.961731] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.311 [2024-07-15 14:05:43.961751] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.311 [2024-07-15 14:05:43.961781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.311 qpair failed and we were unable to recover it. 00:26:49.311 [2024-07-15 14:05:43.971600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.311 [2024-07-15 14:05:43.971715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.311 [2024-07-15 14:05:43.971747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.311 [2024-07-15 14:05:43.971764] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.311 [2024-07-15 14:05:43.971776] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.311 [2024-07-15 14:05:43.971806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.311 qpair failed and we were unable to recover it. 00:26:49.311 [2024-07-15 14:05:43.981622] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.311 [2024-07-15 14:05:43.981757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.311 [2024-07-15 14:05:43.981784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.311 [2024-07-15 14:05:43.981799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.311 [2024-07-15 14:05:43.981811] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.311 [2024-07-15 14:05:43.981840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.311 qpair failed and we were unable to recover it. 
00:26:49.311 [2024-07-15 14:05:43.991748] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.311 [2024-07-15 14:05:43.991854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.311 [2024-07-15 14:05:43.991879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.311 [2024-07-15 14:05:43.991894] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.311 [2024-07-15 14:05:43.991906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.311 [2024-07-15 14:05:43.991936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.311 qpair failed and we were unable to recover it. 00:26:49.311 [2024-07-15 14:05:44.001711] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.311 [2024-07-15 14:05:44.001848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.311 [2024-07-15 14:05:44.001873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.311 [2024-07-15 14:05:44.001888] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.311 [2024-07-15 14:05:44.001900] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.311 [2024-07-15 14:05:44.001929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.311 qpair failed and we were unable to recover it. 00:26:49.311 [2024-07-15 14:05:44.011686] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.311 [2024-07-15 14:05:44.011820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.311 [2024-07-15 14:05:44.011847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.311 [2024-07-15 14:05:44.011861] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.311 [2024-07-15 14:05:44.011874] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.311 [2024-07-15 14:05:44.011903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.311 qpair failed and we were unable to recover it. 
00:26:49.311 [2024-07-15 14:05:44.021800] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.311 [2024-07-15 14:05:44.021922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.311 [2024-07-15 14:05:44.021948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.311 [2024-07-15 14:05:44.021962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.311 [2024-07-15 14:05:44.021975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.311 [2024-07-15 14:05:44.022004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.311 qpair failed and we were unable to recover it. 00:26:49.311 [2024-07-15 14:05:44.031811] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.311 [2024-07-15 14:05:44.031950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.311 [2024-07-15 14:05:44.031975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.311 [2024-07-15 14:05:44.031990] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.311 [2024-07-15 14:05:44.032002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.311 [2024-07-15 14:05:44.032033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.311 qpair failed and we were unable to recover it. 00:26:49.311 [2024-07-15 14:05:44.041800] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.311 [2024-07-15 14:05:44.041939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.311 [2024-07-15 14:05:44.041964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.311 [2024-07-15 14:05:44.041979] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.311 [2024-07-15 14:05:44.041997] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.311 [2024-07-15 14:05:44.042027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.311 qpair failed and we were unable to recover it. 
00:26:49.312 [2024-07-15 14:05:44.051842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.312 [2024-07-15 14:05:44.051990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.312 [2024-07-15 14:05:44.052015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.312 [2024-07-15 14:05:44.052030] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.312 [2024-07-15 14:05:44.052042] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.312 [2024-07-15 14:05:44.052071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.312 qpair failed and we were unable to recover it. 00:26:49.312 [2024-07-15 14:05:44.061855] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.312 [2024-07-15 14:05:44.061954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.312 [2024-07-15 14:05:44.061978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.312 [2024-07-15 14:05:44.061993] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.312 [2024-07-15 14:05:44.062006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.312 [2024-07-15 14:05:44.062050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.312 qpair failed and we were unable to recover it. 00:26:49.312 [2024-07-15 14:05:44.071936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.312 [2024-07-15 14:05:44.072076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.312 [2024-07-15 14:05:44.072099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.312 [2024-07-15 14:05:44.072114] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.312 [2024-07-15 14:05:44.072127] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.312 [2024-07-15 14:05:44.072157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.312 qpair failed and we were unable to recover it. 
00:26:49.312 [2024-07-15 14:05:44.081932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.312 [2024-07-15 14:05:44.082047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.312 [2024-07-15 14:05:44.082071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.312 [2024-07-15 14:05:44.082085] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.312 [2024-07-15 14:05:44.082097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.312 [2024-07-15 14:05:44.082126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.312 qpair failed and we were unable to recover it. 00:26:49.312 [2024-07-15 14:05:44.091945] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.312 [2024-07-15 14:05:44.092059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.312 [2024-07-15 14:05:44.092084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.312 [2024-07-15 14:05:44.092098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.312 [2024-07-15 14:05:44.092110] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.312 [2024-07-15 14:05:44.092139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.312 qpair failed and we were unable to recover it. 00:26:49.312 [2024-07-15 14:05:44.101985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.312 [2024-07-15 14:05:44.102099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.312 [2024-07-15 14:05:44.102123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.312 [2024-07-15 14:05:44.102137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.312 [2024-07-15 14:05:44.102150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.312 [2024-07-15 14:05:44.102178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.312 qpair failed and we were unable to recover it. 
00:26:49.312 [2024-07-15 14:05:44.112010] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.312 [2024-07-15 14:05:44.112118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.312 [2024-07-15 14:05:44.112142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.312 [2024-07-15 14:05:44.112166] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.312 [2024-07-15 14:05:44.112179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.312 [2024-07-15 14:05:44.112207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.312 qpair failed and we were unable to recover it. 00:26:49.312 [2024-07-15 14:05:44.122062] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.312 [2024-07-15 14:05:44.122219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.312 [2024-07-15 14:05:44.122246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.312 [2024-07-15 14:05:44.122260] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.312 [2024-07-15 14:05:44.122272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.312 [2024-07-15 14:05:44.122301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.312 qpair failed and we were unable to recover it. 00:26:49.312 [2024-07-15 14:05:44.132057] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.312 [2024-07-15 14:05:44.132167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.312 [2024-07-15 14:05:44.132191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.312 [2024-07-15 14:05:44.132210] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.312 [2024-07-15 14:05:44.132223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.312 [2024-07-15 14:05:44.132251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.312 qpair failed and we were unable to recover it. 
00:26:49.312 [2024-07-15 14:05:44.142088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.312 [2024-07-15 14:05:44.142183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.312 [2024-07-15 14:05:44.142207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.312 [2024-07-15 14:05:44.142221] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.312 [2024-07-15 14:05:44.142234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.312 [2024-07-15 14:05:44.142267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.312 qpair failed and we were unable to recover it. 00:26:49.572 [2024-07-15 14:05:44.152173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.572 [2024-07-15 14:05:44.152292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.572 [2024-07-15 14:05:44.152327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.572 [2024-07-15 14:05:44.152342] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.572 [2024-07-15 14:05:44.152355] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.572 [2024-07-15 14:05:44.152384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.572 qpair failed and we were unable to recover it. 00:26:49.572 [2024-07-15 14:05:44.162164] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.572 [2024-07-15 14:05:44.162297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.572 [2024-07-15 14:05:44.162323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.572 [2024-07-15 14:05:44.162340] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.572 [2024-07-15 14:05:44.162353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.572 [2024-07-15 14:05:44.162382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.572 qpair failed and we were unable to recover it. 
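Each of the repeated entries above follows the same chain: the target side (ctrlr.c) rejects the I/O queue CONNECT because it does not recognize controller ID 0x1, the host's fabrics connect poll then reports the failed CONNECT with status sct 1, sc 130, the TCP qpair never finishes connecting, and spdk_nvme_qpair_process_completions() surfaces the condition as CQ transport error -6 (-ENXIO). A minimal host-side sketch of where that -6 would be observed through the public SPDK API is below; the bounded polling loop, the default qpair options, and the error handling are illustrative assumptions, not code taken from this test.

    #include <errno.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Illustrative only: allocate an I/O qpair on an already attached
     * fabrics controller and poll it. When the target rejects the
     * CONNECT capsule (as in the log above), process_completions()
     * returns a negative value, typically -ENXIO (-6). */
    static int poll_io_qpair(struct spdk_nvme_ctrlr *ctrlr)
    {
        struct spdk_nvme_qpair *qpair;
        int32_t rc = 0;
        int i;

        qpair = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
        if (qpair == NULL) {
            return -ENOMEM;
        }

        for (i = 0; i < 1000; i++) {
            /* 0 means "process everything currently available". */
            rc = spdk_nvme_qpair_process_completions(qpair, 0);
            if (rc < 0) {
                /* Matches the "CQ transport error -6" lines above. */
                fprintf(stderr, "qpair failed: %d\n", rc);
                break;
            }
        }

        spdk_nvme_ctrlr_free_io_qpair(qpair);
        return rc < 0 ? (int)rc : 0;
    }

Each new connection attempt in the log hits the same rejection, which is why every block ends with the same "qpair failed and we were unable to recover it." line.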
00:26:49.572 [2024-07-15 14:05:44.172160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.572 [2024-07-15 14:05:44.172254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.572 [2024-07-15 14:05:44.172278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.572 [2024-07-15 14:05:44.172292] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.572 [2024-07-15 14:05:44.172304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.572 [2024-07-15 14:05:44.172333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.572 qpair failed and we were unable to recover it. 00:26:49.572 [2024-07-15 14:05:44.182257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.572 [2024-07-15 14:05:44.182350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.572 [2024-07-15 14:05:44.182373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.572 [2024-07-15 14:05:44.182388] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.572 [2024-07-15 14:05:44.182401] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.572 [2024-07-15 14:05:44.182429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.572 qpair failed and we were unable to recover it. 00:26:49.572 [2024-07-15 14:05:44.192270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.572 [2024-07-15 14:05:44.192371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.572 [2024-07-15 14:05:44.192397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.572 [2024-07-15 14:05:44.192412] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.572 [2024-07-15 14:05:44.192425] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.573 [2024-07-15 14:05:44.192454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.573 qpair failed and we were unable to recover it. 
00:26:49.573 [2024-07-15 14:05:44.202288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.573 [2024-07-15 14:05:44.202383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.573 [2024-07-15 14:05:44.202407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.573 [2024-07-15 14:05:44.202421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.573 [2024-07-15 14:05:44.202434] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.573 [2024-07-15 14:05:44.202462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.573 qpair failed and we were unable to recover it. 00:26:49.573 [2024-07-15 14:05:44.212291] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.573 [2024-07-15 14:05:44.212385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.573 [2024-07-15 14:05:44.212408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.573 [2024-07-15 14:05:44.212423] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.573 [2024-07-15 14:05:44.212435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.573 [2024-07-15 14:05:44.212464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.573 qpair failed and we were unable to recover it. 00:26:49.573 [2024-07-15 14:05:44.222299] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.573 [2024-07-15 14:05:44.222396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.573 [2024-07-15 14:05:44.222424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.573 [2024-07-15 14:05:44.222440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.573 [2024-07-15 14:05:44.222453] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.573 [2024-07-15 14:05:44.222481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.573 qpair failed and we were unable to recover it. 
00:26:49.573 [2024-07-15 14:05:44.232375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.573 [2024-07-15 14:05:44.232501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.573 [2024-07-15 14:05:44.232525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.573 [2024-07-15 14:05:44.232540] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.573 [2024-07-15 14:05:44.232553] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.573 [2024-07-15 14:05:44.232581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.573 qpair failed and we were unable to recover it. 00:26:49.573 [2024-07-15 14:05:44.242433] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.573 [2024-07-15 14:05:44.242529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.573 [2024-07-15 14:05:44.242553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.573 [2024-07-15 14:05:44.242568] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.573 [2024-07-15 14:05:44.242580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.573 [2024-07-15 14:05:44.242609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.573 qpair failed and we were unable to recover it. 00:26:49.573 [2024-07-15 14:05:44.252441] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.573 [2024-07-15 14:05:44.252574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.573 [2024-07-15 14:05:44.252599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.573 [2024-07-15 14:05:44.252614] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.573 [2024-07-15 14:05:44.252626] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.573 [2024-07-15 14:05:44.252654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.573 qpair failed and we were unable to recover it. 
00:26:49.573 [2024-07-15 14:05:44.262433] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.573 [2024-07-15 14:05:44.262531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.573 [2024-07-15 14:05:44.262555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.573 [2024-07-15 14:05:44.262570] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.573 [2024-07-15 14:05:44.262582] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.573 [2024-07-15 14:05:44.262616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.573 qpair failed and we were unable to recover it. 00:26:49.573 [2024-07-15 14:05:44.272528] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.573 [2024-07-15 14:05:44.272630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.573 [2024-07-15 14:05:44.272654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.573 [2024-07-15 14:05:44.272668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.573 [2024-07-15 14:05:44.272680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.573 [2024-07-15 14:05:44.272710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.573 qpair failed and we were unable to recover it. 00:26:49.573 [2024-07-15 14:05:44.282561] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.573 [2024-07-15 14:05:44.282656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.573 [2024-07-15 14:05:44.282680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.573 [2024-07-15 14:05:44.282695] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.573 [2024-07-15 14:05:44.282707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.573 [2024-07-15 14:05:44.282762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.573 qpair failed and we were unable to recover it. 
00:26:49.573 [2024-07-15 14:05:44.292569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.573 [2024-07-15 14:05:44.292666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.573 [2024-07-15 14:05:44.292689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.573 [2024-07-15 14:05:44.292704] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.573 [2024-07-15 14:05:44.292716] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.573 [2024-07-15 14:05:44.292771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.573 qpair failed and we were unable to recover it. 00:26:49.573 [2024-07-15 14:05:44.302549] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.573 [2024-07-15 14:05:44.302645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.573 [2024-07-15 14:05:44.302669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.573 [2024-07-15 14:05:44.302698] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.573 [2024-07-15 14:05:44.302711] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.573 [2024-07-15 14:05:44.302749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.573 qpair failed and we were unable to recover it. 00:26:49.573 [2024-07-15 14:05:44.312599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.573 [2024-07-15 14:05:44.312701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.573 [2024-07-15 14:05:44.312753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.573 [2024-07-15 14:05:44.312770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.573 [2024-07-15 14:05:44.312783] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.573 [2024-07-15 14:05:44.312814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.573 qpair failed and we were unable to recover it. 
00:26:49.573 [2024-07-15 14:05:44.322619] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.573 [2024-07-15 14:05:44.322752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.573 [2024-07-15 14:05:44.322777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.573 [2024-07-15 14:05:44.322792] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.573 [2024-07-15 14:05:44.322805] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.573 [2024-07-15 14:05:44.322835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.573 qpair failed and we were unable to recover it. 00:26:49.573 [2024-07-15 14:05:44.332639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.573 [2024-07-15 14:05:44.332765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.573 [2024-07-15 14:05:44.332789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.573 [2024-07-15 14:05:44.332804] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.573 [2024-07-15 14:05:44.332817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.573 [2024-07-15 14:05:44.332848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.574 qpair failed and we were unable to recover it. 00:26:49.574 [2024-07-15 14:05:44.342706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.574 [2024-07-15 14:05:44.342839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.574 [2024-07-15 14:05:44.342865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.574 [2024-07-15 14:05:44.342880] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.574 [2024-07-15 14:05:44.342892] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.574 [2024-07-15 14:05:44.342922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.574 qpair failed and we were unable to recover it. 
00:26:49.574 [2024-07-15 14:05:44.352752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.574 [2024-07-15 14:05:44.352859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.574 [2024-07-15 14:05:44.352884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.574 [2024-07-15 14:05:44.352899] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.574 [2024-07-15 14:05:44.352912] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.574 [2024-07-15 14:05:44.352951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.574 qpair failed and we were unable to recover it. 00:26:49.574 [2024-07-15 14:05:44.362744] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.574 [2024-07-15 14:05:44.362917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.574 [2024-07-15 14:05:44.362942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.574 [2024-07-15 14:05:44.362956] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.574 [2024-07-15 14:05:44.362969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.574 [2024-07-15 14:05:44.363000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.574 qpair failed and we were unable to recover it. 00:26:49.574 [2024-07-15 14:05:44.372795] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.574 [2024-07-15 14:05:44.372907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.574 [2024-07-15 14:05:44.372932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.574 [2024-07-15 14:05:44.372946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.574 [2024-07-15 14:05:44.372960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.574 [2024-07-15 14:05:44.372990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.574 qpair failed and we were unable to recover it. 
00:26:49.574 [2024-07-15 14:05:44.382801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.574 [2024-07-15 14:05:44.382901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.574 [2024-07-15 14:05:44.382927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.574 [2024-07-15 14:05:44.382942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.574 [2024-07-15 14:05:44.382954] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.574 [2024-07-15 14:05:44.382985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.574 qpair failed and we were unable to recover it. 00:26:49.574 [2024-07-15 14:05:44.392836] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.574 [2024-07-15 14:05:44.392948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.574 [2024-07-15 14:05:44.392973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.574 [2024-07-15 14:05:44.392988] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.574 [2024-07-15 14:05:44.393002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.574 [2024-07-15 14:05:44.393047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.574 qpair failed and we were unable to recover it. 00:26:49.574 [2024-07-15 14:05:44.402939] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.574 [2024-07-15 14:05:44.403066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.574 [2024-07-15 14:05:44.403090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.574 [2024-07-15 14:05:44.403105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.574 [2024-07-15 14:05:44.403118] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.574 [2024-07-15 14:05:44.403146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.574 qpair failed and we were unable to recover it. 
00:26:49.833 [2024-07-15 14:05:44.412845] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.833 [2024-07-15 14:05:44.412971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.833 [2024-07-15 14:05:44.412999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.833 [2024-07-15 14:05:44.413015] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.833 [2024-07-15 14:05:44.413027] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.833 [2024-07-15 14:05:44.413057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.833 qpair failed and we were unable to recover it. 00:26:49.833 [2024-07-15 14:05:44.422878] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.833 [2024-07-15 14:05:44.422970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.833 [2024-07-15 14:05:44.422994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.833 [2024-07-15 14:05:44.423009] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.833 [2024-07-15 14:05:44.423022] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.833 [2024-07-15 14:05:44.423052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.833 qpair failed and we were unable to recover it. 00:26:49.833 [2024-07-15 14:05:44.432944] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.833 [2024-07-15 14:05:44.433064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.833 [2024-07-15 14:05:44.433090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.833 [2024-07-15 14:05:44.433108] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.833 [2024-07-15 14:05:44.433120] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.833 [2024-07-15 14:05:44.433149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.833 qpair failed and we were unable to recover it. 
00:26:49.833 [2024-07-15 14:05:44.442933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.833 [2024-07-15 14:05:44.443030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.833 [2024-07-15 14:05:44.443054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.834 [2024-07-15 14:05:44.443068] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.834 [2024-07-15 14:05:44.443087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.834 [2024-07-15 14:05:44.443117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.834 qpair failed and we were unable to recover it. 00:26:49.834 [2024-07-15 14:05:44.452979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.834 [2024-07-15 14:05:44.453091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.834 [2024-07-15 14:05:44.453122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.834 [2024-07-15 14:05:44.453137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.834 [2024-07-15 14:05:44.453149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.834 [2024-07-15 14:05:44.453178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.834 qpair failed and we were unable to recover it. 00:26:49.834 [2024-07-15 14:05:44.462996] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.834 [2024-07-15 14:05:44.463105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.834 [2024-07-15 14:05:44.463129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.834 [2024-07-15 14:05:44.463143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.834 [2024-07-15 14:05:44.463155] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.834 [2024-07-15 14:05:44.463184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.834 qpair failed and we were unable to recover it. 
00:26:49.834 [2024-07-15 14:05:44.473127] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.834 [2024-07-15 14:05:44.473224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.834 [2024-07-15 14:05:44.473248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.834 [2024-07-15 14:05:44.473262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.834 [2024-07-15 14:05:44.473274] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.834 [2024-07-15 14:05:44.473303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.834 qpair failed and we were unable to recover it. 00:26:49.834 [2024-07-15 14:05:44.483146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.834 [2024-07-15 14:05:44.483281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.834 [2024-07-15 14:05:44.483307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.834 [2024-07-15 14:05:44.483322] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.834 [2024-07-15 14:05:44.483334] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.834 [2024-07-15 14:05:44.483362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.834 qpair failed and we were unable to recover it. 00:26:49.834 [2024-07-15 14:05:44.493100] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.834 [2024-07-15 14:05:44.493212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.834 [2024-07-15 14:05:44.493238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.834 [2024-07-15 14:05:44.493253] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.834 [2024-07-15 14:05:44.493265] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.834 [2024-07-15 14:05:44.493293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.834 qpair failed and we were unable to recover it. 
00:26:49.834 [2024-07-15 14:05:44.503107] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.834 [2024-07-15 14:05:44.503223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.834 [2024-07-15 14:05:44.503248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.834 [2024-07-15 14:05:44.503263] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.834 [2024-07-15 14:05:44.503276] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.834 [2024-07-15 14:05:44.503305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.834 qpair failed and we were unable to recover it. 00:26:49.834 [2024-07-15 14:05:44.513154] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.834 [2024-07-15 14:05:44.513265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.834 [2024-07-15 14:05:44.513291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.834 [2024-07-15 14:05:44.513306] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.834 [2024-07-15 14:05:44.513318] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.834 [2024-07-15 14:05:44.513347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.834 qpair failed and we were unable to recover it. 00:26:49.834 [2024-07-15 14:05:44.523154] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.834 [2024-07-15 14:05:44.523252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.834 [2024-07-15 14:05:44.523276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.834 [2024-07-15 14:05:44.523291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.834 [2024-07-15 14:05:44.523303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.834 [2024-07-15 14:05:44.523332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.834 qpair failed and we were unable to recover it. 
00:26:49.834 [2024-07-15 14:05:44.533247] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.834 [2024-07-15 14:05:44.533345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.834 [2024-07-15 14:05:44.533369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.834 [2024-07-15 14:05:44.533388] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.834 [2024-07-15 14:05:44.533401] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.834 [2024-07-15 14:05:44.533430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.834 qpair failed and we were unable to recover it. 00:26:49.834 [2024-07-15 14:05:44.543225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.834 [2024-07-15 14:05:44.543322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.834 [2024-07-15 14:05:44.543345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.834 [2024-07-15 14:05:44.543359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.834 [2024-07-15 14:05:44.543372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.834 [2024-07-15 14:05:44.543401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.834 qpair failed and we were unable to recover it. 00:26:49.834 [2024-07-15 14:05:44.553276] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.834 [2024-07-15 14:05:44.553396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.834 [2024-07-15 14:05:44.553421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.834 [2024-07-15 14:05:44.553436] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.834 [2024-07-15 14:05:44.553449] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.834 [2024-07-15 14:05:44.553479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.834 qpair failed and we were unable to recover it. 
00:26:49.834 [2024-07-15 14:05:44.563308] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.834 [2024-07-15 14:05:44.563406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.834 [2024-07-15 14:05:44.563429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.834 [2024-07-15 14:05:44.563444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.834 [2024-07-15 14:05:44.563456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.834 [2024-07-15 14:05:44.563485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.834 qpair failed and we were unable to recover it. 00:26:49.834 [2024-07-15 14:05:44.573356] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.834 [2024-07-15 14:05:44.573455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.834 [2024-07-15 14:05:44.573479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.834 [2024-07-15 14:05:44.573493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.834 [2024-07-15 14:05:44.573506] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.834 [2024-07-15 14:05:44.573534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.834 qpair failed and we were unable to recover it. 00:26:49.834 [2024-07-15 14:05:44.583371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.834 [2024-07-15 14:05:44.583478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.834 [2024-07-15 14:05:44.583504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.834 [2024-07-15 14:05:44.583519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.834 [2024-07-15 14:05:44.583531] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.834 [2024-07-15 14:05:44.583560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.835 qpair failed and we were unable to recover it. 
00:26:49.835 [2024-07-15 14:05:44.593429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.835 [2024-07-15 14:05:44.593582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.835 [2024-07-15 14:05:44.593608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.835 [2024-07-15 14:05:44.593624] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.835 [2024-07-15 14:05:44.593636] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.835 [2024-07-15 14:05:44.593673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.835 qpair failed and we were unable to recover it. 00:26:49.835 [2024-07-15 14:05:44.603382] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.835 [2024-07-15 14:05:44.603482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.835 [2024-07-15 14:05:44.603507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.835 [2024-07-15 14:05:44.603522] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.835 [2024-07-15 14:05:44.603535] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.835 [2024-07-15 14:05:44.603565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.835 qpair failed and we were unable to recover it. 00:26:49.835 [2024-07-15 14:05:44.613443] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.835 [2024-07-15 14:05:44.613542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.835 [2024-07-15 14:05:44.613568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.835 [2024-07-15 14:05:44.613582] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.835 [2024-07-15 14:05:44.613595] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.835 [2024-07-15 14:05:44.613627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.835 qpair failed and we were unable to recover it. 
00:26:49.835 [2024-07-15 14:05:44.623437] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.835 [2024-07-15 14:05:44.623540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.835 [2024-07-15 14:05:44.623570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.835 [2024-07-15 14:05:44.623586] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.835 [2024-07-15 14:05:44.623598] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.835 [2024-07-15 14:05:44.623627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.835 qpair failed and we were unable to recover it. 00:26:49.835 [2024-07-15 14:05:44.633529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.835 [2024-07-15 14:05:44.633658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.835 [2024-07-15 14:05:44.633684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.835 [2024-07-15 14:05:44.633699] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.835 [2024-07-15 14:05:44.633711] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.835 [2024-07-15 14:05:44.633776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.835 qpair failed and we were unable to recover it. 00:26:49.835 [2024-07-15 14:05:44.643541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.835 [2024-07-15 14:05:44.643678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.835 [2024-07-15 14:05:44.643703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.835 [2024-07-15 14:05:44.643733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.835 [2024-07-15 14:05:44.643755] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.835 [2024-07-15 14:05:44.643787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.835 qpair failed and we were unable to recover it. 
00:26:49.835 [2024-07-15 14:05:44.653567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.835 [2024-07-15 14:05:44.653666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.835 [2024-07-15 14:05:44.653690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.835 [2024-07-15 14:05:44.653704] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.835 [2024-07-15 14:05:44.653731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.835 [2024-07-15 14:05:44.653770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.835 qpair failed and we were unable to recover it. 00:26:49.835 [2024-07-15 14:05:44.663609] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.835 [2024-07-15 14:05:44.663711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.835 [2024-07-15 14:05:44.663736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.835 [2024-07-15 14:05:44.663765] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.835 [2024-07-15 14:05:44.663778] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:49.835 [2024-07-15 14:05:44.663813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.835 qpair failed and we were unable to recover it. 00:26:50.094 [2024-07-15 14:05:44.673652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.094 [2024-07-15 14:05:44.673792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.094 [2024-07-15 14:05:44.673819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.094 [2024-07-15 14:05:44.673834] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.094 [2024-07-15 14:05:44.673846] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.094 [2024-07-15 14:05:44.673881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.094 qpair failed and we were unable to recover it. 
00:26:50.094 [2024-07-15 14:05:44.683630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.094 [2024-07-15 14:05:44.683761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.094 [2024-07-15 14:05:44.683786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.094 [2024-07-15 14:05:44.683801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.094 [2024-07-15 14:05:44.683814] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.094 [2024-07-15 14:05:44.683844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.094 qpair failed and we were unable to recover it. 00:26:50.094 [2024-07-15 14:05:44.693683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.094 [2024-07-15 14:05:44.693813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.094 [2024-07-15 14:05:44.693838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.094 [2024-07-15 14:05:44.693853] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.094 [2024-07-15 14:05:44.693866] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.094 [2024-07-15 14:05:44.693895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.094 qpair failed and we were unable to recover it. 00:26:50.094 [2024-07-15 14:05:44.703703] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.094 [2024-07-15 14:05:44.703838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.094 [2024-07-15 14:05:44.703865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.094 [2024-07-15 14:05:44.703880] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.094 [2024-07-15 14:05:44.703892] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.094 [2024-07-15 14:05:44.703922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.094 qpair failed and we were unable to recover it. 
00:26:50.094 [2024-07-15 14:05:44.713775] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.094 [2024-07-15 14:05:44.713885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.094 [2024-07-15 14:05:44.713916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.094 [2024-07-15 14:05:44.713932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.094 [2024-07-15 14:05:44.713945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.094 [2024-07-15 14:05:44.713974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.094 qpair failed and we were unable to recover it. 00:26:50.094 [2024-07-15 14:05:44.723745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.094 [2024-07-15 14:05:44.723899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.094 [2024-07-15 14:05:44.723925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.094 [2024-07-15 14:05:44.723941] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.094 [2024-07-15 14:05:44.723953] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.094 [2024-07-15 14:05:44.723984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.094 qpair failed and we were unable to recover it. 00:26:50.095 [2024-07-15 14:05:44.733757] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.095 [2024-07-15 14:05:44.733894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.095 [2024-07-15 14:05:44.733921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.095 [2024-07-15 14:05:44.733937] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.095 [2024-07-15 14:05:44.733949] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.095 [2024-07-15 14:05:44.733979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.095 qpair failed and we were unable to recover it. 
00:26:50.095 [2024-07-15 14:05:44.743830] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.095 [2024-07-15 14:05:44.743951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.095 [2024-07-15 14:05:44.743976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.095 [2024-07-15 14:05:44.743991] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.095 [2024-07-15 14:05:44.744004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.095 [2024-07-15 14:05:44.744033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.095 qpair failed and we were unable to recover it. 00:26:50.095 [2024-07-15 14:05:44.753909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.095 [2024-07-15 14:05:44.754032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.095 [2024-07-15 14:05:44.754074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.095 [2024-07-15 14:05:44.754089] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.095 [2024-07-15 14:05:44.754102] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.095 [2024-07-15 14:05:44.754136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.095 qpair failed and we were unable to recover it. 00:26:50.095 [2024-07-15 14:05:44.763857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.095 [2024-07-15 14:05:44.763966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.095 [2024-07-15 14:05:44.763993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.095 [2024-07-15 14:05:44.764008] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.095 [2024-07-15 14:05:44.764021] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.095 [2024-07-15 14:05:44.764066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.095 qpair failed and we were unable to recover it. 
00:26:50.095 [2024-07-15 14:05:44.773934] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.095 [2024-07-15 14:05:44.774061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.095 [2024-07-15 14:05:44.774087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.095 [2024-07-15 14:05:44.774102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.095 [2024-07-15 14:05:44.774115] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.095 [2024-07-15 14:05:44.774145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.095 qpair failed and we were unable to recover it. 00:26:50.095 [2024-07-15 14:05:44.783951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.095 [2024-07-15 14:05:44.784061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.095 [2024-07-15 14:05:44.784086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.095 [2024-07-15 14:05:44.784101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.095 [2024-07-15 14:05:44.784114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.095 [2024-07-15 14:05:44.784143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.095 qpair failed and we were unable to recover it. 00:26:50.095 [2024-07-15 14:05:44.793999] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.095 [2024-07-15 14:05:44.794121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.095 [2024-07-15 14:05:44.794146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.095 [2024-07-15 14:05:44.794161] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.095 [2024-07-15 14:05:44.794173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.095 [2024-07-15 14:05:44.794213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.095 qpair failed and we were unable to recover it. 
00:26:50.095 [2024-07-15 14:05:44.804015] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.095 [2024-07-15 14:05:44.804159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.095 [2024-07-15 14:05:44.804190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.095 [2024-07-15 14:05:44.804207] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.095 [2024-07-15 14:05:44.804220] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.095 [2024-07-15 14:05:44.804261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.095 qpair failed and we were unable to recover it. 00:26:50.095 [2024-07-15 14:05:44.814067] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.095 [2024-07-15 14:05:44.814166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.095 [2024-07-15 14:05:44.814191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.095 [2024-07-15 14:05:44.814205] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.095 [2024-07-15 14:05:44.814218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.095 [2024-07-15 14:05:44.814246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.095 qpair failed and we were unable to recover it. 00:26:50.095 [2024-07-15 14:05:44.824009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.095 [2024-07-15 14:05:44.824122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.095 [2024-07-15 14:05:44.824147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.095 [2024-07-15 14:05:44.824162] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.095 [2024-07-15 14:05:44.824174] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.095 [2024-07-15 14:05:44.824203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.095 qpair failed and we were unable to recover it. 
00:26:50.095 [2024-07-15 14:05:44.834123] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.095 [2024-07-15 14:05:44.834267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.095 [2024-07-15 14:05:44.834292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.095 [2024-07-15 14:05:44.834307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.095 [2024-07-15 14:05:44.834320] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.095 [2024-07-15 14:05:44.834359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.095 qpair failed and we were unable to recover it. 00:26:50.095 [2024-07-15 14:05:44.844144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.095 [2024-07-15 14:05:44.844243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.095 [2024-07-15 14:05:44.844268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.095 [2024-07-15 14:05:44.844282] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.095 [2024-07-15 14:05:44.844300] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.095 [2024-07-15 14:05:44.844329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.095 qpair failed and we were unable to recover it. 00:26:50.095 [2024-07-15 14:05:44.854158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.095 [2024-07-15 14:05:44.854301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.095 [2024-07-15 14:05:44.854327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.095 [2024-07-15 14:05:44.854341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.095 [2024-07-15 14:05:44.854354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.095 [2024-07-15 14:05:44.854392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.095 qpair failed and we were unable to recover it. 
00:26:50.095 [2024-07-15 14:05:44.864181] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.095 [2024-07-15 14:05:44.864274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.095 [2024-07-15 14:05:44.864298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.095 [2024-07-15 14:05:44.864312] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.095 [2024-07-15 14:05:44.864325] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.095 [2024-07-15 14:05:44.864353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.095 qpair failed and we were unable to recover it. 00:26:50.095 [2024-07-15 14:05:44.874242] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.095 [2024-07-15 14:05:44.874343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.096 [2024-07-15 14:05:44.874367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.096 [2024-07-15 14:05:44.874382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.096 [2024-07-15 14:05:44.874393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.096 [2024-07-15 14:05:44.874422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.096 qpair failed and we were unable to recover it. 00:26:50.096 [2024-07-15 14:05:44.884254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.096 [2024-07-15 14:05:44.884356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.096 [2024-07-15 14:05:44.884379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.096 [2024-07-15 14:05:44.884394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.096 [2024-07-15 14:05:44.884406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.096 [2024-07-15 14:05:44.884435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.096 qpair failed and we were unable to recover it. 
00:26:50.096 [2024-07-15 14:05:44.894283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.096 [2024-07-15 14:05:44.894385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.096 [2024-07-15 14:05:44.894410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.096 [2024-07-15 14:05:44.894425] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.096 [2024-07-15 14:05:44.894437] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.096 [2024-07-15 14:05:44.894465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.096 qpair failed and we were unable to recover it. 00:26:50.096 [2024-07-15 14:05:44.904306] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.096 [2024-07-15 14:05:44.904403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.096 [2024-07-15 14:05:44.904428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.096 [2024-07-15 14:05:44.904443] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.096 [2024-07-15 14:05:44.904455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.096 [2024-07-15 14:05:44.904483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.096 qpair failed and we were unable to recover it. 00:26:50.096 [2024-07-15 14:05:44.914309] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.096 [2024-07-15 14:05:44.914414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.096 [2024-07-15 14:05:44.914439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.096 [2024-07-15 14:05:44.914454] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.096 [2024-07-15 14:05:44.914466] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.096 [2024-07-15 14:05:44.914494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.096 qpair failed and we were unable to recover it. 
00:26:50.096 [2024-07-15 14:05:44.924427] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.096 [2024-07-15 14:05:44.924528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.096 [2024-07-15 14:05:44.924553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.096 [2024-07-15 14:05:44.924568] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.096 [2024-07-15 14:05:44.924580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.096 [2024-07-15 14:05:44.924608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.096 qpair failed and we were unable to recover it. 00:26:50.355 [2024-07-15 14:05:44.934405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.355 [2024-07-15 14:05:44.934528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.355 [2024-07-15 14:05:44.934560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.355 [2024-07-15 14:05:44.934581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.355 [2024-07-15 14:05:44.934595] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.355 [2024-07-15 14:05:44.934625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.355 qpair failed and we were unable to recover it. 00:26:50.355 [2024-07-15 14:05:44.944418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.355 [2024-07-15 14:05:44.944521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.355 [2024-07-15 14:05:44.944547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.355 [2024-07-15 14:05:44.944561] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.355 [2024-07-15 14:05:44.944574] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.355 [2024-07-15 14:05:44.944602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.355 qpair failed and we were unable to recover it. 
00:26:50.355 [2024-07-15 14:05:44.954514] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.356 [2024-07-15 14:05:44.954631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.356 [2024-07-15 14:05:44.954656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.356 [2024-07-15 14:05:44.954671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.356 [2024-07-15 14:05:44.954683] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.356 [2024-07-15 14:05:44.954711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.356 qpair failed and we were unable to recover it. 00:26:50.356 [2024-07-15 14:05:44.964421] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.356 [2024-07-15 14:05:44.964527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.356 [2024-07-15 14:05:44.964552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.356 [2024-07-15 14:05:44.964567] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.356 [2024-07-15 14:05:44.964579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.356 [2024-07-15 14:05:44.964608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.356 qpair failed and we were unable to recover it. 00:26:50.356 [2024-07-15 14:05:44.974469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.356 [2024-07-15 14:05:44.974568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.356 [2024-07-15 14:05:44.974593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.356 [2024-07-15 14:05:44.974608] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.356 [2024-07-15 14:05:44.974620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.356 [2024-07-15 14:05:44.974648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.356 qpair failed and we were unable to recover it. 
00:26:50.356 [2024-07-15 14:05:44.984528] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.356 [2024-07-15 14:05:44.984627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.356 [2024-07-15 14:05:44.984653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.356 [2024-07-15 14:05:44.984668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.356 [2024-07-15 14:05:44.984680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.356 [2024-07-15 14:05:44.984709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.356 qpair failed and we were unable to recover it. 00:26:50.356 [2024-07-15 14:05:44.994521] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.356 [2024-07-15 14:05:44.994626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.356 [2024-07-15 14:05:44.994651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.356 [2024-07-15 14:05:44.994665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.356 [2024-07-15 14:05:44.994677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.356 [2024-07-15 14:05:44.994706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.356 qpair failed and we were unable to recover it. 00:26:50.356 [2024-07-15 14:05:45.004592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.356 [2024-07-15 14:05:45.004693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.356 [2024-07-15 14:05:45.004733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.356 [2024-07-15 14:05:45.004757] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.356 [2024-07-15 14:05:45.004770] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.356 [2024-07-15 14:05:45.004802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.356 qpair failed and we were unable to recover it. 
00:26:50.356 [2024-07-15 14:05:45.014627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.356 [2024-07-15 14:05:45.014762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.356 [2024-07-15 14:05:45.014789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.356 [2024-07-15 14:05:45.014804] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.356 [2024-07-15 14:05:45.014817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.356 [2024-07-15 14:05:45.014859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.356 qpair failed and we were unable to recover it. 00:26:50.356 [2024-07-15 14:05:45.024641] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.356 [2024-07-15 14:05:45.024784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.356 [2024-07-15 14:05:45.024810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.356 [2024-07-15 14:05:45.024831] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.356 [2024-07-15 14:05:45.024844] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.356 [2024-07-15 14:05:45.024882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.356 qpair failed and we were unable to recover it. 00:26:50.356 [2024-07-15 14:05:45.034681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.356 [2024-07-15 14:05:45.034815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.356 [2024-07-15 14:05:45.034841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.356 [2024-07-15 14:05:45.034857] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.356 [2024-07-15 14:05:45.034869] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.356 [2024-07-15 14:05:45.034899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.356 qpair failed and we were unable to recover it. 
00:26:50.356 [2024-07-15 14:05:45.044702] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.356 [2024-07-15 14:05:45.044833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.356 [2024-07-15 14:05:45.044859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.356 [2024-07-15 14:05:45.044875] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.356 [2024-07-15 14:05:45.044887] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.356 [2024-07-15 14:05:45.044917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.356 qpair failed and we were unable to recover it. 00:26:50.356 [2024-07-15 14:05:45.054769] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.356 [2024-07-15 14:05:45.054875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.356 [2024-07-15 14:05:45.054901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.356 [2024-07-15 14:05:45.054916] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.356 [2024-07-15 14:05:45.054929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.356 [2024-07-15 14:05:45.054959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.356 qpair failed and we were unable to recover it. 00:26:50.356 [2024-07-15 14:05:45.064706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.356 [2024-07-15 14:05:45.064835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.356 [2024-07-15 14:05:45.064859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.356 [2024-07-15 14:05:45.064873] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.356 [2024-07-15 14:05:45.064886] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.356 [2024-07-15 14:05:45.064916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.356 qpair failed and we were unable to recover it. 
00:26:50.356 [2024-07-15 14:05:45.074813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.356 [2024-07-15 14:05:45.074922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.356 [2024-07-15 14:05:45.074949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.356 [2024-07-15 14:05:45.074964] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.356 [2024-07-15 14:05:45.074977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.356 [2024-07-15 14:05:45.075007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.356 qpair failed and we were unable to recover it. 00:26:50.356 [2024-07-15 14:05:45.084888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.356 [2024-07-15 14:05:45.085020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.356 [2024-07-15 14:05:45.085061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.356 [2024-07-15 14:05:45.085076] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.356 [2024-07-15 14:05:45.085089] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.356 [2024-07-15 14:05:45.085118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.356 qpair failed and we were unable to recover it. 00:26:50.356 [2024-07-15 14:05:45.094929] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.357 [2024-07-15 14:05:45.095034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.357 [2024-07-15 14:05:45.095081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.357 [2024-07-15 14:05:45.095095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.357 [2024-07-15 14:05:45.095108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.357 [2024-07-15 14:05:45.095136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.357 qpair failed and we were unable to recover it. 
00:26:50.357 [2024-07-15 14:05:45.104890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.357 [2024-07-15 14:05:45.105007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.357 [2024-07-15 14:05:45.105033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.357 [2024-07-15 14:05:45.105048] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.357 [2024-07-15 14:05:45.105061] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.357 [2024-07-15 14:05:45.105106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.357 qpair failed and we were unable to recover it. 00:26:50.357 [2024-07-15 14:05:45.114876] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.357 [2024-07-15 14:05:45.114979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.357 [2024-07-15 14:05:45.115014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.357 [2024-07-15 14:05:45.115049] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.357 [2024-07-15 14:05:45.115062] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.357 [2024-07-15 14:05:45.115091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.357 qpair failed and we were unable to recover it. 00:26:50.357 [2024-07-15 14:05:45.125001] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.357 [2024-07-15 14:05:45.125125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.357 [2024-07-15 14:05:45.125148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.357 [2024-07-15 14:05:45.125163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.357 [2024-07-15 14:05:45.125175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.357 [2024-07-15 14:05:45.125204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.357 qpair failed and we were unable to recover it. 
00:26:50.357 [2024-07-15 14:05:45.134930] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.357 [2024-07-15 14:05:45.135041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.357 [2024-07-15 14:05:45.135066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.357 [2024-07-15 14:05:45.135081] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.357 [2024-07-15 14:05:45.135093] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.357 [2024-07-15 14:05:45.135122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.357 qpair failed and we were unable to recover it. 00:26:50.357 [2024-07-15 14:05:45.144996] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.357 [2024-07-15 14:05:45.145115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.357 [2024-07-15 14:05:45.145141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.357 [2024-07-15 14:05:45.145155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.357 [2024-07-15 14:05:45.145167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.357 [2024-07-15 14:05:45.145196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.357 qpair failed and we were unable to recover it. 00:26:50.357 [2024-07-15 14:05:45.155058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.357 [2024-07-15 14:05:45.155215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.357 [2024-07-15 14:05:45.155241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.357 [2024-07-15 14:05:45.155265] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.357 [2024-07-15 14:05:45.155278] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.357 [2024-07-15 14:05:45.155313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.357 qpair failed and we were unable to recover it. 
00:26:50.357 [2024-07-15 14:05:45.165116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.357 [2024-07-15 14:05:45.165218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.357 [2024-07-15 14:05:45.165252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.357 [2024-07-15 14:05:45.165267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.357 [2024-07-15 14:05:45.165279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.357 [2024-07-15 14:05:45.165308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.357 qpair failed and we were unable to recover it. 00:26:50.357 [2024-07-15 14:05:45.175080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.357 [2024-07-15 14:05:45.175179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.357 [2024-07-15 14:05:45.175204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.357 [2024-07-15 14:05:45.175219] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.357 [2024-07-15 14:05:45.175231] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.357 [2024-07-15 14:05:45.175259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.357 qpair failed and we were unable to recover it. 00:26:50.357 [2024-07-15 14:05:45.185103] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.357 [2024-07-15 14:05:45.185202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.357 [2024-07-15 14:05:45.185228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.357 [2024-07-15 14:05:45.185243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.357 [2024-07-15 14:05:45.185255] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.357 [2024-07-15 14:05:45.185284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.357 qpair failed and we were unable to recover it. 
00:26:50.357 [2024-07-15 14:05:45.195201] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.617 [2024-07-15 14:05:45.195346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.617 [2024-07-15 14:05:45.195373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.617 [2024-07-15 14:05:45.195388] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.617 [2024-07-15 14:05:45.195401] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.617 [2024-07-15 14:05:45.195430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.617 qpair failed and we were unable to recover it. 00:26:50.617 [2024-07-15 14:05:45.205115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.617 [2024-07-15 14:05:45.205219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.617 [2024-07-15 14:05:45.205249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.617 [2024-07-15 14:05:45.205265] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.617 [2024-07-15 14:05:45.205277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.617 [2024-07-15 14:05:45.205306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.617 qpair failed and we were unable to recover it. 00:26:50.617 [2024-07-15 14:05:45.215178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.617 [2024-07-15 14:05:45.215285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.617 [2024-07-15 14:05:45.215310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.617 [2024-07-15 14:05:45.215325] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.617 [2024-07-15 14:05:45.215337] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.617 [2024-07-15 14:05:45.215365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.617 qpair failed and we were unable to recover it. 
00:26:50.617 [2024-07-15 14:05:45.225207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.617 [2024-07-15 14:05:45.225316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.617 [2024-07-15 14:05:45.225341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.617 [2024-07-15 14:05:45.225356] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.617 [2024-07-15 14:05:45.225368] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.617 [2024-07-15 14:05:45.225396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.617 qpair failed and we were unable to recover it. 00:26:50.617 [2024-07-15 14:05:45.235223] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.617 [2024-07-15 14:05:45.235341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.617 [2024-07-15 14:05:45.235367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.617 [2024-07-15 14:05:45.235382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.617 [2024-07-15 14:05:45.235394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.617 [2024-07-15 14:05:45.235422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.617 qpair failed and we were unable to recover it. 00:26:50.617 [2024-07-15 14:05:45.245272] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.617 [2024-07-15 14:05:45.245376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.617 [2024-07-15 14:05:45.245402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.617 [2024-07-15 14:05:45.245417] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.617 [2024-07-15 14:05:45.245434] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.617 [2024-07-15 14:05:45.245463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.617 qpair failed and we were unable to recover it. 
00:26:50.617 [2024-07-15 14:05:45.255299] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.617 [2024-07-15 14:05:45.255400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.617 [2024-07-15 14:05:45.255424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.617 [2024-07-15 14:05:45.255439] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.617 [2024-07-15 14:05:45.255451] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.617 [2024-07-15 14:05:45.255479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.617 qpair failed and we were unable to recover it. 00:26:50.617 [2024-07-15 14:05:45.265320] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.617 [2024-07-15 14:05:45.265422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.617 [2024-07-15 14:05:45.265447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.617 [2024-07-15 14:05:45.265462] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.617 [2024-07-15 14:05:45.265474] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.617 [2024-07-15 14:05:45.265502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.617 qpair failed and we were unable to recover it. 00:26:50.617 [2024-07-15 14:05:45.275319] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.618 [2024-07-15 14:05:45.275424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.618 [2024-07-15 14:05:45.275449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.618 [2024-07-15 14:05:45.275464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.618 [2024-07-15 14:05:45.275476] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.618 [2024-07-15 14:05:45.275504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.618 qpair failed and we were unable to recover it. 
00:26:50.618 [2024-07-15 14:05:45.285388] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.618 [2024-07-15 14:05:45.285480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.618 [2024-07-15 14:05:45.285505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.618 [2024-07-15 14:05:45.285520] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.618 [2024-07-15 14:05:45.285532] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.618 [2024-07-15 14:05:45.285561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.618 qpair failed and we were unable to recover it. 00:26:50.618 [2024-07-15 14:05:45.295379] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.618 [2024-07-15 14:05:45.295475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.618 [2024-07-15 14:05:45.295499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.618 [2024-07-15 14:05:45.295515] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.618 [2024-07-15 14:05:45.295528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.618 [2024-07-15 14:05:45.295557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.618 qpair failed and we were unable to recover it. 00:26:50.618 [2024-07-15 14:05:45.305441] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.618 [2024-07-15 14:05:45.305537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.618 [2024-07-15 14:05:45.305576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.618 [2024-07-15 14:05:45.305591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.618 [2024-07-15 14:05:45.305603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.618 [2024-07-15 14:05:45.305633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.618 qpair failed and we were unable to recover it. 
00:26:50.618 [2024-07-15 14:05:45.315505] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.618 [2024-07-15 14:05:45.315633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.618 [2024-07-15 14:05:45.315658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.618 [2024-07-15 14:05:45.315673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.618 [2024-07-15 14:05:45.315685] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.618 [2024-07-15 14:05:45.315728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.618 qpair failed and we were unable to recover it. 00:26:50.618 [2024-07-15 14:05:45.325505] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.618 [2024-07-15 14:05:45.325600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.618 [2024-07-15 14:05:45.325625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.618 [2024-07-15 14:05:45.325640] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.618 [2024-07-15 14:05:45.325652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.618 [2024-07-15 14:05:45.325681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.618 qpair failed and we were unable to recover it. 00:26:50.618 [2024-07-15 14:05:45.335542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.618 [2024-07-15 14:05:45.335638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.618 [2024-07-15 14:05:45.335662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.618 [2024-07-15 14:05:45.335681] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.618 [2024-07-15 14:05:45.335694] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.618 [2024-07-15 14:05:45.335747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.618 qpair failed and we were unable to recover it. 
00:26:50.618 [2024-07-15 14:05:45.345548] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.618 [2024-07-15 14:05:45.345644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.618 [2024-07-15 14:05:45.345683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.618 [2024-07-15 14:05:45.345699] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.618 [2024-07-15 14:05:45.345711] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.618 [2024-07-15 14:05:45.345747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.618 qpair failed and we were unable to recover it. 00:26:50.618 [2024-07-15 14:05:45.355595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.618 [2024-07-15 14:05:45.355695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.618 [2024-07-15 14:05:45.355734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.618 [2024-07-15 14:05:45.355760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.618 [2024-07-15 14:05:45.355774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.618 [2024-07-15 14:05:45.355804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.618 qpair failed and we were unable to recover it. 00:26:50.618 [2024-07-15 14:05:45.365614] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.618 [2024-07-15 14:05:45.365727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.618 [2024-07-15 14:05:45.365767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.618 [2024-07-15 14:05:45.365785] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.618 [2024-07-15 14:05:45.365798] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc4000b90 00:26:50.618 [2024-07-15 14:05:45.365830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.618 qpair failed and we were unable to recover it. 
00:26:50.618 [2024-07-15 14:05:45.375627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.618 [2024-07-15 14:05:45.375746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.618 [2024-07-15 14:05:45.375781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.618 [2024-07-15 14:05:45.375798] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.618 [2024-07-15 14:05:45.375811] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:50.618 [2024-07-15 14:05:45.375843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.618 qpair failed and we were unable to recover it. 00:26:50.618 [2024-07-15 14:05:45.385665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.618 [2024-07-15 14:05:45.385783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.618 [2024-07-15 14:05:45.385813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.618 [2024-07-15 14:05:45.385829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.618 [2024-07-15 14:05:45.385841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:50.618 [2024-07-15 14:05:45.385872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.618 qpair failed and we were unable to recover it. 00:26:50.618 [2024-07-15 14:05:45.395765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.618 [2024-07-15 14:05:45.395869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.618 [2024-07-15 14:05:45.395895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.618 [2024-07-15 14:05:45.395910] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.618 [2024-07-15 14:05:45.395923] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23f9ea0 00:26:50.618 [2024-07-15 14:05:45.395952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:50.618 qpair failed and we were unable to recover it. 
00:26:50.618 [2024-07-15 14:05:45.405744] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.618 [2024-07-15 14:05:45.405878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.618 [2024-07-15 14:05:45.405910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.618 [2024-07-15 14:05:45.405927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.618 [2024-07-15 14:05:45.405943] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dbc000b90 00:26:50.618 [2024-07-15 14:05:45.405975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.618 qpair failed and we were unable to recover it. 00:26:50.618 [2024-07-15 14:05:45.406124] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23f6ae0 (9): Bad file descriptor 00:26:50.619 [2024-07-15 14:05:45.415765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.619 [2024-07-15 14:05:45.415869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.619 [2024-07-15 14:05:45.415899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.619 [2024-07-15 14:05:45.415915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.619 [2024-07-15 14:05:45.415928] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dcc000b90 00:26:50.619 [2024-07-15 14:05:45.415959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:50.619 qpair failed and we were unable to recover it. 00:26:50.619 [2024-07-15 14:05:45.425840] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.619 [2024-07-15 14:05:45.425939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.619 [2024-07-15 14:05:45.425970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.619 [2024-07-15 14:05:45.425986] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.619 [2024-07-15 14:05:45.425999] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dcc000b90 00:26:50.619 [2024-07-15 14:05:45.426029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:50.619 qpair failed and we were unable to recover it. 
00:26:50.619 Initializing NVMe Controllers 00:26:50.619 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:50.619 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:50.619 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:26:50.619 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:26:50.619 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:26:50.619 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:26:50.619 Initialization complete. Launching workers. 00:26:50.619 Starting thread on core 1 00:26:50.619 Starting thread on core 2 00:26:50.619 Starting thread on core 3 00:26:50.619 Starting thread on core 0 00:26:50.619 14:05:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:26:50.619 00:26:50.619 real 0m10.858s 00:26:50.619 user 0m18.358s 00:26:50.619 sys 0m5.654s 00:26:50.619 14:05:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:50.619 14:05:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:50.619 ************************************ 00:26:50.619 END TEST nvmf_target_disconnect_tc2 00:26:50.619 ************************************ 00:26:50.877 14:05:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:26:50.877 14:05:45 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:26:50.877 14:05:45 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:26:50.877 14:05:45 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:26:50.877 14:05:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:50.877 14:05:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:26:50.877 14:05:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:50.877 14:05:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:26:50.877 14:05:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:50.877 14:05:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:50.877 rmmod nvme_tcp 00:26:50.877 rmmod nvme_fabrics 00:26:50.877 rmmod nvme_keyring 00:26:50.877 14:05:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:50.877 14:05:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:26:50.877 14:05:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:26:50.877 14:05:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 3862341 ']' 00:26:50.877 14:05:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 3862341 00:26:50.877 14:05:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 3862341 ']' 00:26:50.877 14:05:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 3862341 00:26:50.877 14:05:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:26:50.877 14:05:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:50.877 14:05:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps 
--no-headers -o comm= 3862341 00:26:50.877 14:05:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:26:50.877 14:05:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:26:50.877 14:05:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3862341' 00:26:50.877 killing process with pid 3862341 00:26:50.877 14:05:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 3862341 00:26:50.877 14:05:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 3862341 00:26:51.134 14:05:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:51.134 14:05:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:51.134 14:05:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:51.134 14:05:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:51.134 14:05:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:51.134 14:05:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:51.134 14:05:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:51.134 14:05:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:53.669 14:05:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:53.669 00:26:53.669 real 0m15.824s 00:26:53.669 user 0m44.955s 00:26:53.669 sys 0m7.690s 00:26:53.669 14:05:47 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:53.669 14:05:47 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:53.669 ************************************ 00:26:53.669 END TEST nvmf_target_disconnect 00:26:53.669 ************************************ 00:26:53.669 14:05:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:53.669 14:05:47 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:26:53.669 14:05:47 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:53.669 14:05:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:53.669 14:05:47 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:26:53.669 00:26:53.669 real 19m21.104s 00:26:53.669 user 45m37.615s 00:26:53.669 sys 5m7.280s 00:26:53.669 14:05:47 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:53.669 14:05:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:53.669 ************************************ 00:26:53.669 END TEST nvmf_tcp 00:26:53.669 ************************************ 00:26:53.669 14:05:47 -- common/autotest_common.sh@1142 -- # return 0 00:26:53.669 14:05:47 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:26:53.669 14:05:47 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:26:53.669 14:05:47 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:53.669 14:05:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:53.669 14:05:47 -- common/autotest_common.sh@10 -- # set +x 00:26:53.669 ************************************ 00:26:53.669 START TEST spdkcli_nvmf_tcp 00:26:53.669 ************************************ 00:26:53.669 14:05:47 
spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:26:53.669 * Looking for test storage... 00:26:53.669 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:26:53.669 14:05:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:26:53.669 14:05:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:26:53.669 14:05:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3863543 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3863543 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 3863543 ']' 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:53.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:53.670 [2024-07-15 14:05:48.111937] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:26:53.670 [2024-07-15 14:05:48.112048] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3863543 ] 00:26:53.670 EAL: No free 2048 kB hugepages reported on node 1 00:26:53.670 [2024-07-15 14:05:48.170352] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:53.670 [2024-07-15 14:05:48.280514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:53.670 [2024-07-15 14:05:48.280518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:53.670 14:05:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:26:53.670 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:26:53.670 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:26:53.670 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:26:53.670 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:26:53.670 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:26:53.670 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:26:53.670 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:26:53.670 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:26:53.670 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:26:53.670 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:53.670 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:53.670 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:26:53.670 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:53.670 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:53.670 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:26:53.670 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:53.670 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:26:53.670 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:26:53.670 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:53.670 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:26:53.670 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:26:53.670 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:26:53.670 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:26:53.670 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:53.670 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:26:53.670 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:26:53.670 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:26:53.670 ' 00:26:56.205 [2024-07-15 14:05:50.981151] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:57.581 [2024-07-15 14:05:52.201401] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:27:00.116 [2024-07-15 14:05:54.464399] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:27:02.018 [2024-07-15 14:05:56.414379] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:27:03.396 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:27:03.396 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:27:03.396 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:27:03.396 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:27:03.396 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:27:03.396 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:27:03.396 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:27:03.396 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:03.396 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:27:03.396 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:27:03.396 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:03.396 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:03.396 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:27:03.396 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:03.396 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:03.396 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:27:03.396 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:03.396 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:03.396 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:03.396 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:03.396 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:27:03.396 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:27:03.396 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:03.396 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:27:03.396 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:03.396 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:27:03.396 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:27:03.396 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:27:03.396 14:05:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:27:03.396 14:05:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:03.396 14:05:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:03.396 14:05:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:27:03.396 14:05:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:03.396 14:05:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:03.396 14:05:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:27:03.396 14:05:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:27:03.654 14:05:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:27:03.911 14:05:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:27:03.911 14:05:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:27:03.911 14:05:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:03.911 14:05:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:03.911 14:05:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:27:03.911 14:05:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:03.911 14:05:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:03.911 14:05:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:27:03.911 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:27:03.911 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:03.911 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:27:03.911 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:27:03.911 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:27:03.911 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:27:03.911 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:03.911 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:27:03.911 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:27:03.911 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:27:03.911 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:27:03.911 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:27:03.911 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:27:03.911 ' 00:27:09.215 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:27:09.215 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:27:09.215 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:09.215 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:27:09.215 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:27:09.215 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:27:09.215 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:27:09.215 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:09.215 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:27:09.215 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:27:09.215 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:27:09.215 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:27:09.215 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:27:09.215 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:27:09.215 14:06:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:27:09.215 14:06:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:09.215 14:06:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:09.215 14:06:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3863543 00:27:09.215 14:06:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 3863543 ']' 00:27:09.215 14:06:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 3863543 00:27:09.215 14:06:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:27:09.215 14:06:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:09.215 14:06:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3863543 00:27:09.215 14:06:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:09.215 14:06:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:09.215 14:06:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3863543' 00:27:09.215 killing process with pid 3863543 00:27:09.216 14:06:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 3863543 00:27:09.216 14:06:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 3863543 00:27:09.474 14:06:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:27:09.474 14:06:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:27:09.474 14:06:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3863543 ']' 00:27:09.475 14:06:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3863543 00:27:09.475 14:06:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 3863543 ']' 00:27:09.475 14:06:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 3863543 00:27:09.475 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3863543) - No such process 00:27:09.475 14:06:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 3863543 is not found' 00:27:09.475 Process with pid 3863543 is not found 00:27:09.475 14:06:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:27:09.475 14:06:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:27:09.475 14:06:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:27:09.475 00:27:09.475 real 0m16.078s 00:27:09.475 user 0m33.950s 00:27:09.475 sys 0m0.797s 00:27:09.475 14:06:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:09.475 14:06:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:09.475 ************************************ 00:27:09.475 END TEST spdkcli_nvmf_tcp 00:27:09.475 ************************************ 00:27:09.475 14:06:04 -- common/autotest_common.sh@1142 -- # return 0 00:27:09.475 14:06:04 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:09.475 14:06:04 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:09.475 14:06:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:09.475 14:06:04 -- common/autotest_common.sh@10 -- # set +x 00:27:09.475 ************************************ 00:27:09.475 START TEST nvmf_identify_passthru 00:27:09.475 ************************************ 00:27:09.475 14:06:04 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:09.475 * Looking for test storage... 00:27:09.475 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:09.475 14:06:04 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:09.475 14:06:04 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:27:09.475 14:06:04 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:09.475 14:06:04 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:09.475 14:06:04 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:09.475 14:06:04 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:09.475 14:06:04 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:09.475 14:06:04 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:09.475 14:06:04 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:09.475 14:06:04 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:09.475 14:06:04 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:09.475 14:06:04 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:09.475 14:06:04 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:09.475 14:06:04 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:27:09.475 14:06:04 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:09.475 14:06:04 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:09.475 14:06:04 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:09.475 14:06:04 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:09.475 14:06:04 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:09.475 14:06:04 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:09.475 14:06:04 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:09.475 14:06:04 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:09.475 14:06:04 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.475 14:06:04 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.475 14:06:04 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.475 14:06:04 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:27:09.475 14:06:04 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.475 14:06:04 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:27:09.475 14:06:04 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:09.475 14:06:04 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:09.475 14:06:04 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:09.475 14:06:04 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:09.475 14:06:04 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:09.475 14:06:04 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:09.475 14:06:04 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:09.475 14:06:04 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:09.475 14:06:04 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:09.475 14:06:04 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:09.475 14:06:04 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:09.475 14:06:04 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:09.475 14:06:04 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.475 14:06:04 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.475 14:06:04 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.475 14:06:04 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:27:09.475 14:06:04 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.475 14:06:04 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:27:09.475 14:06:04 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:09.475 14:06:04 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:09.475 14:06:04 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:09.475 14:06:04 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:09.475 14:06:04 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:09.475 14:06:04 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:09.475 14:06:04 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:09.475 14:06:04 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:09.475 14:06:04 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:09.475 14:06:04 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:09.475 14:06:04 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:27:09.475 14:06:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:12.006 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:12.006 14:06:06 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:27:12.006 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:12.006 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:12.006 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:12.006 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:12.006 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:27:12.007 Found 0000:84:00.0 (0x8086 - 0x159b) 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:27:12.007 Found 0000:84:00.1 (0x8086 - 0x159b) 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:27:12.007 Found net devices under 0000:84:00.0: cvl_0_0 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:27:12.007 Found net devices under 0000:84:00.1: cvl_0_1 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
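The two "Found net devices under ..." lines above are the tail end of the harness's NIC discovery: it looks for Intel E810 ports (PCI ID 8086:159b) and maps each PCI function to its kernel netdev through sysfs. A minimal sketch of that mapping outside the harness is below, using the two PCI addresses reported above; the sysfs path is the one the trace shows, everything else is illustrative:

    for pci in 0000:84:00.0 0000:84:00.1; do
        for net in /sys/bus/pci/devices/$pci/net/*; do
            # on this rig these resolve to cvl_0_0 and cvl_0_1
            echo "$pci -> ${net##*/}"
        done
    done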
00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:12.007 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:12.007 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.115 ms 00:27:12.007 00:27:12.007 --- 10.0.0.2 ping statistics --- 00:27:12.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:12.007 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:12.007 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:12.007 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:27:12.007 00:27:12.007 --- 10.0.0.1 ping statistics --- 00:27:12.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:12.007 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:12.007 14:06:06 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:12.007 14:06:06 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:27:12.007 14:06:06 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:12.007 14:06:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:12.007 14:06:06 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:27:12.007 14:06:06 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:27:12.007 14:06:06 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:27:12.007 14:06:06 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:27:12.007 14:06:06 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:27:12.007 14:06:06 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:27:12.007 14:06:06 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:27:12.007 14:06:06 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:12.007 14:06:06 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:12.007 14:06:06 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:27:12.007 14:06:06 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:27:12.007 14:06:06 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:82:00.0 00:27:12.007 14:06:06 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:82:00.0 00:27:12.007 14:06:06 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:82:00.0 00:27:12.007 14:06:06 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:82:00.0 ']' 00:27:12.007 14:06:06 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:82:00.0' -i 0 00:27:12.007 14:06:06 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:27:12.007 14:06:06 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:27:12.007 EAL: No free 2048 kB hugepages reported on node 1 00:27:16.198 
14:06:10 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ9142051K1P0FGN 00:27:16.198 14:06:10 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:82:00.0' -i 0 00:27:16.198 14:06:10 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:27:16.198 14:06:10 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:27:16.198 EAL: No free 2048 kB hugepages reported on node 1 00:27:20.388 14:06:14 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:27:20.388 14:06:14 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:27:20.388 14:06:14 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:20.388 14:06:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:20.388 14:06:14 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:27:20.388 14:06:14 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:20.388 14:06:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:20.388 14:06:14 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3868685 00:27:20.388 14:06:14 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:20.388 14:06:14 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:20.388 14:06:14 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3868685 00:27:20.388 14:06:14 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 3868685 ']' 00:27:20.388 14:06:14 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:20.388 14:06:14 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:20.388 14:06:14 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:20.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:20.388 14:06:14 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:20.388 14:06:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:20.388 [2024-07-15 14:06:14.908088] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:27:20.388 [2024-07-15 14:06:14.908175] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:20.388 EAL: No free 2048 kB hugepages reported on node 1 00:27:20.388 [2024-07-15 14:06:14.973537] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:20.388 [2024-07-15 14:06:15.085787] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:20.388 [2024-07-15 14:06:15.085853] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
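At this point the test has recorded the local controller's identity and started the target: the serial and model number grep'd above (BTLJ9142051K1P0FGN / INTEL) are what the passthru check compares against later, and nvmf_tgt is launched paused (--wait-for-rpc) inside the namespace that owns the target-side port. Condensed into plain shell, the traced steps amount to the sketch below; $rootdir is the SPDK checkout used throughout this log, and taking the first traddr with head -n1 is an assumption about what get_first_nvme_bdf boils down to:

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # first NVMe device known to the config generator
    bdf=$("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr' | head -n1)
    # identify it over PCIe and keep the serial number for the later passthru comparison
    serial=$("$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 \
               | grep 'Serial Number:' | awk '{print $3}')
    # start the target in the namespace holding cvl_0_0/10.0.0.2, paused until RPCs arrive
    ip netns exec cvl_0_0_ns_spdk "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &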
00:27:20.388 [2024-07-15 14:06:15.085868] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:20.388 [2024-07-15 14:06:15.085880] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:20.388 [2024-07-15 14:06:15.085889] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:20.388 [2024-07-15 14:06:15.085944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:20.388 [2024-07-15 14:06:15.085965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:20.388 [2024-07-15 14:06:15.086027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:20.388 [2024-07-15 14:06:15.086029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:20.388 14:06:15 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:20.388 14:06:15 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:27:20.388 14:06:15 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:27:20.388 14:06:15 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.388 14:06:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:20.389 INFO: Log level set to 20 00:27:20.389 INFO: Requests: 00:27:20.389 { 00:27:20.389 "jsonrpc": "2.0", 00:27:20.389 "method": "nvmf_set_config", 00:27:20.389 "id": 1, 00:27:20.389 "params": { 00:27:20.389 "admin_cmd_passthru": { 00:27:20.389 "identify_ctrlr": true 00:27:20.389 } 00:27:20.389 } 00:27:20.389 } 00:27:20.389 00:27:20.389 INFO: response: 00:27:20.389 { 00:27:20.389 "jsonrpc": "2.0", 00:27:20.389 "id": 1, 00:27:20.389 "result": true 00:27:20.389 } 00:27:20.389 00:27:20.389 14:06:15 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.389 14:06:15 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:27:20.389 14:06:15 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.389 14:06:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:20.389 INFO: Setting log level to 20 00:27:20.389 INFO: Setting log level to 20 00:27:20.389 INFO: Log level set to 20 00:27:20.389 INFO: Log level set to 20 00:27:20.389 INFO: Requests: 00:27:20.389 { 00:27:20.389 "jsonrpc": "2.0", 00:27:20.389 "method": "framework_start_init", 00:27:20.389 "id": 1 00:27:20.389 } 00:27:20.389 00:27:20.389 INFO: Requests: 00:27:20.389 { 00:27:20.389 "jsonrpc": "2.0", 00:27:20.389 "method": "framework_start_init", 00:27:20.389 "id": 1 00:27:20.389 } 00:27:20.389 00:27:20.647 [2024-07-15 14:06:15.236956] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:27:20.647 INFO: response: 00:27:20.647 { 00:27:20.647 "jsonrpc": "2.0", 00:27:20.647 "id": 1, 00:27:20.647 "result": true 00:27:20.647 } 00:27:20.647 00:27:20.647 INFO: response: 00:27:20.647 { 00:27:20.647 "jsonrpc": "2.0", 00:27:20.647 "id": 1, 00:27:20.647 "result": true 00:27:20.647 } 00:27:20.647 00:27:20.647 14:06:15 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.647 14:06:15 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:20.647 14:06:15 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.647 14:06:15 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:27:20.647 INFO: Setting log level to 40 00:27:20.647 INFO: Setting log level to 40 00:27:20.647 INFO: Setting log level to 40 00:27:20.647 [2024-07-15 14:06:15.246965] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:20.647 14:06:15 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.647 14:06:15 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:27:20.647 14:06:15 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:20.647 14:06:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:20.647 14:06:15 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:82:00.0 00:27:20.647 14:06:15 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.647 14:06:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:23.931 Nvme0n1 00:27:23.931 14:06:18 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.931 14:06:18 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:27:23.931 14:06:18 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.931 14:06:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:23.931 14:06:18 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.931 14:06:18 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:23.931 14:06:18 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.931 14:06:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:23.931 14:06:18 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.931 14:06:18 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:23.931 14:06:18 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.931 14:06:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:23.931 [2024-07-15 14:06:18.138878] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:23.931 14:06:18 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.931 14:06:18 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:27:23.931 14:06:18 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.931 14:06:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:23.931 [ 00:27:23.931 { 00:27:23.931 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:23.931 "subtype": "Discovery", 00:27:23.931 "listen_addresses": [], 00:27:23.931 "allow_any_host": true, 00:27:23.931 "hosts": [] 00:27:23.931 }, 00:27:23.931 { 00:27:23.931 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:23.931 "subtype": "NVMe", 00:27:23.931 "listen_addresses": [ 00:27:23.931 { 00:27:23.931 "trtype": "TCP", 00:27:23.931 "adrfam": "IPv4", 00:27:23.931 "traddr": "10.0.0.2", 00:27:23.931 "trsvcid": "4420" 00:27:23.931 } 00:27:23.931 ], 00:27:23.931 "allow_any_host": true, 00:27:23.931 "hosts": [], 00:27:23.931 "serial_number": 
"SPDK00000000000001", 00:27:23.931 "model_number": "SPDK bdev Controller", 00:27:23.931 "max_namespaces": 1, 00:27:23.931 "min_cntlid": 1, 00:27:23.931 "max_cntlid": 65519, 00:27:23.931 "namespaces": [ 00:27:23.931 { 00:27:23.931 "nsid": 1, 00:27:23.931 "bdev_name": "Nvme0n1", 00:27:23.931 "name": "Nvme0n1", 00:27:23.931 "nguid": "7C1295B65B3742588D93C527EF0C08DF", 00:27:23.931 "uuid": "7c1295b6-5b37-4258-8d93-c527ef0c08df" 00:27:23.931 } 00:27:23.931 ] 00:27:23.931 } 00:27:23.931 ] 00:27:23.931 14:06:18 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.931 14:06:18 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:23.931 14:06:18 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:27:23.931 14:06:18 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:27:23.931 EAL: No free 2048 kB hugepages reported on node 1 00:27:23.931 14:06:18 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ9142051K1P0FGN 00:27:23.931 14:06:18 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:23.931 14:06:18 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:27:23.931 14:06:18 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:27:23.931 EAL: No free 2048 kB hugepages reported on node 1 00:27:23.931 14:06:18 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:27:23.931 14:06:18 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ9142051K1P0FGN '!=' BTLJ9142051K1P0FGN ']' 00:27:23.931 14:06:18 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:27:23.931 14:06:18 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:23.931 14:06:18 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.931 14:06:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:23.931 14:06:18 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.931 14:06:18 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:27:23.931 14:06:18 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:27:23.931 14:06:18 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:23.931 14:06:18 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:27:23.931 14:06:18 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:23.931 14:06:18 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:27:23.931 14:06:18 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:23.931 14:06:18 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:23.931 rmmod nvme_tcp 00:27:23.931 rmmod nvme_fabrics 00:27:23.931 rmmod nvme_keyring 00:27:23.931 14:06:18 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:23.931 14:06:18 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:27:23.931 14:06:18 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:27:23.931 14:06:18 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 3868685 ']' 00:27:23.931 14:06:18 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 3868685 00:27:23.931 14:06:18 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 3868685 ']' 00:27:23.931 14:06:18 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 3868685 00:27:23.931 14:06:18 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:27:23.931 14:06:18 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:23.931 14:06:18 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3868685 00:27:23.931 14:06:18 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:23.931 14:06:18 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:23.931 14:06:18 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3868685' 00:27:23.931 killing process with pid 3868685 00:27:23.931 14:06:18 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 3868685 00:27:23.931 14:06:18 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 3868685 00:27:25.307 14:06:20 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:25.307 14:06:20 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:25.307 14:06:20 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:25.307 14:06:20 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:25.307 14:06:20 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:25.307 14:06:20 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:25.307 14:06:20 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:25.307 14:06:20 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:27.839 14:06:22 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:27.839 00:27:27.839 real 0m18.030s 00:27:27.839 user 0m26.451s 00:27:27.839 sys 0m2.367s 00:27:27.839 14:06:22 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:27.839 14:06:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:27.839 ************************************ 00:27:27.839 END TEST nvmf_identify_passthru 00:27:27.839 ************************************ 00:27:27.839 14:06:22 -- common/autotest_common.sh@1142 -- # return 0 00:27:27.839 14:06:22 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:27:27.839 14:06:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:27.839 14:06:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:27.839 14:06:22 -- common/autotest_common.sh@10 -- # set +x 00:27:27.839 ************************************ 00:27:27.839 START TEST nvmf_dif 00:27:27.839 ************************************ 00:27:27.839 14:06:22 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:27:27.839 * Looking for test storage... 
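Before the dif run gets going, it is worth condensing what nvmf_identify_passthru just exercised. Stripped of the xtrace noise, the target-side bring-up is a short RPC sequence; the sketch below replays it with scripts/rpc.py (rpc_cmd in the harness wraps the same calls; the default /var/tmp/spdk.sock socket and $rootdir from the earlier sketch are assumed), followed by the over-fabric identify that has to report the same serial number as the PCIe-side identify earlier:

    rpc="$rootdir/scripts/rpc.py"
    $rpc nvmf_set_config --passthru-identify-ctrlr      # must land before framework_start_init
    $rpc framework_start_init
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:82:00.0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # the passthru check: identify over NVMe/TCP and compare with the local serial
    "$rootdir/build/bin/spdk_nvme_identify" \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        | grep 'Serial Number:'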
00:27:27.839 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:27.839 14:06:22 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:27.839 14:06:22 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:27:27.839 14:06:22 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:27.839 14:06:22 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:27.839 14:06:22 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:27.839 14:06:22 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:27.839 14:06:22 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:27.839 14:06:22 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:27.839 14:06:22 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:27.839 14:06:22 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:27.839 14:06:22 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:27.839 14:06:22 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:27.839 14:06:22 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:27.839 14:06:22 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:27:27.839 14:06:22 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:27.839 14:06:22 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:27.839 14:06:22 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:27.839 14:06:22 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:27.839 14:06:22 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:27.839 14:06:22 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:27.839 14:06:22 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:27.839 14:06:22 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:27.839 14:06:22 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.839 14:06:22 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.839 14:06:22 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.839 14:06:22 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:27:27.840 14:06:22 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.840 14:06:22 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:27:27.840 14:06:22 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:27.840 14:06:22 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:27.840 14:06:22 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:27.840 14:06:22 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:27.840 14:06:22 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:27.840 14:06:22 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:27.840 14:06:22 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:27.840 14:06:22 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:27.840 14:06:22 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:27:27.840 14:06:22 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:27:27.840 14:06:22 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:27:27.840 14:06:22 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:27:27.840 14:06:22 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:27:27.840 14:06:22 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:27.840 14:06:22 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:27.840 14:06:22 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:27.840 14:06:22 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:27.840 14:06:22 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:27.840 14:06:22 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:27.840 14:06:22 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:27.840 14:06:22 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:27.840 14:06:22 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:27.840 14:06:22 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:27.840 14:06:22 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:27:27.840 14:06:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:29.739 14:06:24 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:29.739 14:06:24 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:27:29.739 14:06:24 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:29.739 14:06:24 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:29.739 14:06:24 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:29.739 14:06:24 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:29.739 14:06:24 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:29.739 14:06:24 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:27:29.739 14:06:24 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:29.739 14:06:24 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:27:29.739 14:06:24 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:27:29.739 14:06:24 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:27:29.739 14:06:24 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:27:29.739 14:06:24 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:27:29.739 14:06:24 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:27:29.739 14:06:24 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:29.739 14:06:24 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:29.739 14:06:24 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:29.739 14:06:24 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:29.739 14:06:24 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:29.739 14:06:24 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:29.739 14:06:24 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:29.739 14:06:24 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:29.739 14:06:24 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:29.739 14:06:24 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:29.739 14:06:24 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:29.739 14:06:24 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:29.739 14:06:24 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:29.739 14:06:24 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:29.739 14:06:24 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:29.739 14:06:24 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:27:29.740 Found 0000:84:00.0 (0x8086 - 0x159b) 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:27:29.740 Found 0000:84:00.1 (0x8086 - 0x159b) 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
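The dif run re-executes the same NIC discovery and namespace bring-up that was traced for the previous test; what is new is the dif.sh parameter block a few lines back (NULL_META=16, NULL_BLOCK_SIZE=512, NULL_SIZE=64, NULL_DIF=1), i.e. null bdevs with 512-byte blocks, 16 bytes of metadata and protection-information type 1. A rough sketch of creating such a bdev through rpc.py is below; the bdev name and the --md-size/--dif-type option spellings are assumptions recalled from SPDK's rpc.py help, not something shown in this excerpt:

    # Assumed rpc.py options; verify with: scripts/rpc.py bdev_null_create -h
    "$rootdir/scripts/rpc.py" bdev_null_create Null0 64 512 --md-size 16 --dif-type 1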
00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:27:29.740 Found net devices under 0000:84:00.0: cvl_0_0 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:27:29.740 Found net devices under 0000:84:00.1: cvl_0_1 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:29.740 14:06:24 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:29.740 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:29.740 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:27:29.740 00:27:29.740 --- 10.0.0.2 ping statistics --- 00:27:29.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:29.740 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:29.740 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:29.740 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:27:29.740 00:27:29.740 --- 10.0.0.1 ping statistics --- 00:27:29.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:29.740 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:27:29.740 14:06:24 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:31.116 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:27:31.116 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:27:31.116 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:27:31.116 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:27:31.116 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:27:31.116 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:27:31.116 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:27:31.116 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:27:31.116 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:27:31.116 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:27:31.116 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:27:31.116 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:27:31.116 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:27:31.116 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:27:31.116 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:27:31.116 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:27:31.116 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:27:31.116 14:06:25 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:31.116 14:06:25 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:31.116 14:06:25 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:31.116 14:06:25 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:31.116 14:06:25 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:31.116 14:06:25 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:31.116 14:06:25 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:27:31.116 14:06:25 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:27:31.116 14:06:25 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:31.116 14:06:25 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:31.116 14:06:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:31.116 14:06:25 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=3871967 00:27:31.116 14:06:25 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:27:31.116 14:06:25 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 3871967 00:27:31.116 14:06:25 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 3871967 ']' 00:27:31.116 14:06:25 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:31.116 14:06:25 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:31.116 14:06:25 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:31.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:31.116 14:06:25 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:31.116 14:06:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:31.116 [2024-07-15 14:06:25.836600] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:27:31.116 [2024-07-15 14:06:25.836671] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:31.116 EAL: No free 2048 kB hugepages reported on node 1 00:27:31.116 [2024-07-15 14:06:25.898979] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:31.379 [2024-07-15 14:06:26.000956] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:31.379 [2024-07-15 14:06:26.001015] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:31.379 [2024-07-15 14:06:26.001028] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:31.379 [2024-07-15 14:06:26.001038] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:31.379 [2024-07-15 14:06:26.001048] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
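For reference, the nvmf_tcp_init sequence captured above (the two E810 ports found earlier, one pushed into a network namespace so target and initiator can share the host) reduces to the following commands; the cvl_0_* names and 10.0.0.x addresses are specific to this machine, and $SPDK_DIR stands in for the workspace path shown in the log:

    ip netns add cvl_0_0_ns_spdk                                   # namespace that owns the target-side port
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the first port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                             # verify both directions before starting the target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR"/build/bin/nvmf_tgt -i 0 -e 0xFFFF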
00:27:31.379 [2024-07-15 14:06:26.001080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:31.379 14:06:26 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:31.379 14:06:26 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:27:31.379 14:06:26 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:31.379 14:06:26 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:31.379 14:06:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:31.379 14:06:26 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:31.379 14:06:26 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:27:31.379 14:06:26 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:27:31.379 14:06:26 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.379 14:06:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:31.379 [2024-07-15 14:06:26.135652] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:31.379 14:06:26 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.379 14:06:26 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:27:31.379 14:06:26 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:31.379 14:06:26 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:31.379 14:06:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:31.379 ************************************ 00:27:31.379 START TEST fio_dif_1_default 00:27:31.379 ************************************ 00:27:31.379 14:06:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:27:31.379 14:06:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:27:31.379 14:06:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:27:31.379 14:06:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:27:31.379 14:06:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:27:31.379 14:06:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:27:31.379 14:06:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:31.379 14:06:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.379 14:06:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:31.379 bdev_null0 00:27:31.379 14:06:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.379 14:06:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:31.379 14:06:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.379 14:06:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:31.379 14:06:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.379 14:06:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:31.379 14:06:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.379 14:06:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:31.379 14:06:26 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.379 14:06:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:31.379 14:06:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.379 14:06:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:31.379 [2024-07-15 14:06:26.191970] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:31.379 14:06:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.379 14:06:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:27:31.379 14:06:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:27:31.379 14:06:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:31.379 14:06:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:27:31.379 14:06:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:27:31.379 14:06:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:31.379 14:06:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:31.379 { 00:27:31.379 "params": { 00:27:31.379 "name": "Nvme$subsystem", 00:27:31.379 "trtype": "$TEST_TRANSPORT", 00:27:31.379 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:31.379 "adrfam": "ipv4", 00:27:31.379 "trsvcid": "$NVMF_PORT", 00:27:31.379 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:31.379 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:31.379 "hdgst": ${hdgst:-false}, 00:27:31.379 "ddgst": ${ddgst:-false} 00:27:31.379 }, 00:27:31.379 "method": "bdev_nvme_attach_controller" 00:27:31.379 } 00:27:31.379 EOF 00:27:31.379 )") 00:27:31.379 14:06:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:31.379 14:06:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:31.379 14:06:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:31.379 14:06:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:31.379 14:06:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:27:31.379 14:06:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:31.379 14:06:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:27:31.379 14:06:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:31.379 14:06:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:27:31.379 14:06:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:27:31.379 14:06:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:31.379 14:06:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:31.379 14:06:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:27:31.379 14:06:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:31.379 14:06:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:27:31.379 14:06:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:27:31.379 14:06:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:31.379 14:06:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:27:31.379 14:06:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:27:31.379 14:06:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:27:31.379 14:06:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:31.379 "params": { 00:27:31.379 "name": "Nvme0", 00:27:31.379 "trtype": "tcp", 00:27:31.379 "traddr": "10.0.0.2", 00:27:31.379 "adrfam": "ipv4", 00:27:31.379 "trsvcid": "4420", 00:27:31.379 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:31.379 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:31.379 "hdgst": false, 00:27:31.379 "ddgst": false 00:27:31.379 }, 00:27:31.379 "method": "bdev_nvme_attach_controller" 00:27:31.379 }' 00:27:31.379 14:06:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:31.379 14:06:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:31.379 14:06:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:31.639 14:06:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:31.639 14:06:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:27:31.639 14:06:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:31.639 14:06:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:31.639 14:06:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:31.639 14:06:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:31.639 14:06:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:31.639 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:31.639 fio-3.35 00:27:31.639 Starting 1 thread 00:27:31.639 EAL: No free 2048 kB hugepages reported on node 1 00:27:43.849 00:27:43.849 filename0: (groupid=0, jobs=1): err= 0: pid=3872199: Mon Jul 15 14:06:37 2024 00:27:43.849 read: IOPS=189, BW=758KiB/s (776kB/s)(7584KiB/10002msec) 00:27:43.849 slat (nsec): min=3503, max=84247, avg=9695.32, stdev=4836.12 00:27:43.849 clat (usec): min=541, max=45637, avg=21070.57, stdev=20284.18 00:27:43.849 lat (usec): min=549, max=45661, avg=21080.26, stdev=20283.69 00:27:43.849 clat percentiles (usec): 00:27:43.849 | 1.00th=[ 586], 5.00th=[ 611], 10.00th=[ 635], 20.00th=[ 676], 00:27:43.849 | 30.00th=[ 734], 40.00th=[ 775], 50.00th=[41157], 60.00th=[41157], 00:27:43.849 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:27:43.849 | 99.00th=[41681], 99.50th=[42206], 99.90th=[45876], 99.95th=[45876], 00:27:43.849 | 99.99th=[45876] 00:27:43.849 bw ( KiB/s): min= 672, max= 768, per=100.00%, avg=759.58, stdev=25.78, samples=19 00:27:43.849 iops : min= 168, max= 192, 
avg=189.89, stdev= 6.45, samples=19 00:27:43.849 lat (usec) : 750=33.76%, 1000=16.03% 00:27:43.849 lat (msec) : 50=50.21% 00:27:43.849 cpu : usr=89.54%, sys=10.19%, ctx=19, majf=0, minf=196 00:27:43.849 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:43.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:43.849 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:43.849 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:43.849 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:43.849 00:27:43.849 Run status group 0 (all jobs): 00:27:43.849 READ: bw=758KiB/s (776kB/s), 758KiB/s-758KiB/s (776kB/s-776kB/s), io=7584KiB (7766kB), run=10002-10002msec 00:27:43.849 14:06:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:27:43.849 14:06:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:27:43.849 14:06:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:27:43.849 14:06:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:43.849 14:06:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:27:43.849 14:06:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:43.849 14:06:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.849 14:06:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:43.849 14:06:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.849 14:06:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:43.849 14:06:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.849 14:06:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:43.849 14:06:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.849 00:27:43.849 real 0m11.278s 00:27:43.849 user 0m10.322s 00:27:43.849 sys 0m1.338s 00:27:43.849 14:06:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:43.849 14:06:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:43.849 ************************************ 00:27:43.849 END TEST fio_dif_1_default 00:27:43.849 ************************************ 00:27:43.849 14:06:37 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:27:43.849 14:06:37 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:27:43.849 14:06:37 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:43.849 14:06:37 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:43.850 14:06:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:43.850 ************************************ 00:27:43.850 START TEST fio_dif_1_multi_subsystems 00:27:43.850 ************************************ 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:27:43.850 14:06:37 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:43.850 bdev_null0 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:43.850 [2024-07-15 14:06:37.525040] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:43.850 bdev_null1 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:43.850 { 00:27:43.850 "params": { 00:27:43.850 "name": "Nvme$subsystem", 00:27:43.850 "trtype": "$TEST_TRANSPORT", 00:27:43.850 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:43.850 "adrfam": "ipv4", 00:27:43.850 "trsvcid": "$NVMF_PORT", 00:27:43.850 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:43.850 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:43.850 "hdgst": ${hdgst:-false}, 00:27:43.850 "ddgst": ${ddgst:-false} 00:27:43.850 }, 00:27:43.850 "method": "bdev_nvme_attach_controller" 00:27:43.850 } 00:27:43.850 EOF 00:27:43.850 )") 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:43.850 { 00:27:43.850 "params": { 00:27:43.850 "name": "Nvme$subsystem", 00:27:43.850 "trtype": "$TEST_TRANSPORT", 00:27:43.850 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:43.850 "adrfam": "ipv4", 00:27:43.850 "trsvcid": "$NVMF_PORT", 00:27:43.850 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:43.850 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:43.850 "hdgst": ${hdgst:-false}, 00:27:43.850 "ddgst": ${ddgst:-false} 00:27:43.850 }, 00:27:43.850 "method": "bdev_nvme_attach_controller" 00:27:43.850 } 00:27:43.850 EOF 00:27:43.850 )") 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:43.850 "params": { 00:27:43.850 "name": "Nvme0", 00:27:43.850 "trtype": "tcp", 00:27:43.850 "traddr": "10.0.0.2", 00:27:43.850 "adrfam": "ipv4", 00:27:43.850 "trsvcid": "4420", 00:27:43.850 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:43.850 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:43.850 "hdgst": false, 00:27:43.850 "ddgst": false 00:27:43.850 }, 00:27:43.850 "method": "bdev_nvme_attach_controller" 00:27:43.850 },{ 00:27:43.850 "params": { 00:27:43.850 "name": "Nvme1", 00:27:43.850 "trtype": "tcp", 00:27:43.850 "traddr": "10.0.0.2", 00:27:43.850 "adrfam": "ipv4", 00:27:43.850 "trsvcid": "4420", 00:27:43.850 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:43.850 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:43.850 "hdgst": false, 00:27:43.850 "ddgst": false 00:27:43.850 }, 00:27:43.850 "method": "bdev_nvme_attach_controller" 00:27:43.850 }' 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:43.850 14:06:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:43.850 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:43.851 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:43.851 fio-3.35 00:27:43.851 Starting 2 threads 00:27:43.851 EAL: No free 2048 kB hugepages reported on node 1 00:27:53.825 00:27:53.825 filename0: (groupid=0, jobs=1): err= 0: pid=3873596: Mon Jul 15 14:06:48 2024 00:27:53.825 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10019msec) 00:27:53.825 slat (nsec): min=4111, max=38632, avg=10876.16, stdev=5587.12 00:27:53.825 clat (usec): min=40734, max=44369, avg=41024.07, stdev=314.31 00:27:53.825 lat (usec): min=40742, max=44382, avg=41034.94, stdev=314.17 00:27:53.825 clat percentiles (usec): 00:27:53.825 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:27:53.825 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:27:53.825 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:27:53.825 | 99.00th=[42730], 99.50th=[42730], 99.90th=[44303], 99.95th=[44303], 00:27:53.825 | 99.99th=[44303] 
00:27:53.825 bw ( KiB/s): min= 384, max= 416, per=33.79%, avg=388.80, stdev=11.72, samples=20 00:27:53.825 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:27:53.825 lat (msec) : 50=100.00% 00:27:53.825 cpu : usr=94.32%, sys=5.41%, ctx=12, majf=0, minf=128 00:27:53.825 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:53.825 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:53.825 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:53.825 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:53.825 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:53.826 filename1: (groupid=0, jobs=1): err= 0: pid=3873597: Mon Jul 15 14:06:48 2024 00:27:53.826 read: IOPS=189, BW=759KiB/s (777kB/s)(7600KiB/10010msec) 00:27:53.826 slat (nsec): min=3831, max=59704, avg=10422.78, stdev=5312.50 00:27:53.826 clat (usec): min=521, max=44242, avg=21039.44, stdev=20325.30 00:27:53.826 lat (usec): min=529, max=44276, avg=21049.86, stdev=20324.60 00:27:53.826 clat percentiles (usec): 00:27:53.826 | 1.00th=[ 562], 5.00th=[ 578], 10.00th=[ 594], 20.00th=[ 635], 00:27:53.826 | 30.00th=[ 701], 40.00th=[ 750], 50.00th=[41157], 60.00th=[41157], 00:27:53.826 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:27:53.826 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:27:53.826 | 99.99th=[44303] 00:27:53.826 bw ( KiB/s): min= 704, max= 768, per=66.02%, avg=758.40, stdev=21.02, samples=20 00:27:53.826 iops : min= 176, max= 192, avg=189.60, stdev= 5.26, samples=20 00:27:53.826 lat (usec) : 750=39.26%, 1000=10.37% 00:27:53.826 lat (msec) : 2=0.26%, 50=50.11% 00:27:53.826 cpu : usr=94.27%, sys=5.46%, ctx=15, majf=0, minf=129 00:27:53.826 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:53.826 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:53.826 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:53.826 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:53.826 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:53.826 00:27:53.826 Run status group 0 (all jobs): 00:27:53.826 READ: bw=1148KiB/s (1176kB/s), 390KiB/s-759KiB/s (399kB/s-777kB/s), io=11.2MiB (11.8MB), run=10010-10019msec 00:27:54.103 14:06:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:27:54.103 14:06:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:27:54.103 14:06:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:27:54.103 14:06:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:54.103 14:06:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:27:54.103 14:06:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:54.103 14:06:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.103 14:06:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:54.103 14:06:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.103 14:06:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:54.103 14:06:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:27:54.103 14:06:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:54.103 14:06:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.103 14:06:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:27:54.103 14:06:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:54.103 14:06:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:27:54.103 14:06:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:54.103 14:06:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.103 14:06:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:54.103 14:06:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.103 14:06:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:54.103 14:06:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.103 14:06:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:54.103 14:06:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.103 00:27:54.103 real 0m11.407s 00:27:54.103 user 0m20.230s 00:27:54.103 sys 0m1.391s 00:27:54.103 14:06:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:54.103 14:06:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:54.103 ************************************ 00:27:54.103 END TEST fio_dif_1_multi_subsystems 00:27:54.103 ************************************ 00:27:54.103 14:06:48 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:27:54.103 14:06:48 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:27:54.103 14:06:48 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:54.103 14:06:48 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:54.103 14:06:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:54.377 ************************************ 00:27:54.377 START TEST fio_dif_rand_params 00:27:54.377 ************************************ 00:27:54.377 14:06:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:27:54.377 14:06:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:27:54.377 14:06:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:27:54.377 14:06:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:27:54.377 14:06:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:27:54.377 14:06:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:27:54.377 14:06:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:27:54.377 14:06:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:27:54.377 14:06:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:27:54.377 14:06:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:54.377 14:06:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:54.377 14:06:48 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@31 -- # create_subsystem 0 00:27:54.377 14:06:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:54.377 14:06:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:27:54.377 14:06:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.377 14:06:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:54.377 bdev_null0 00:27:54.377 14:06:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.377 14:06:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:54.377 14:06:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.377 14:06:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:54.377 14:06:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.377 14:06:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:54.377 14:06:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.377 14:06:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:54.377 14:06:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.377 14:06:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:54.377 14:06:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.377 14:06:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:54.377 [2024-07-15 14:06:48.972971] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:54.377 14:06:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.377 14:06:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:27:54.377 14:06:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:27:54.377 14:06:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:54.377 14:06:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:27:54.377 14:06:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:27:54.377 14:06:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:54.377 14:06:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:54.377 14:06:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:54.377 { 00:27:54.377 "params": { 00:27:54.377 "name": "Nvme$subsystem", 00:27:54.377 "trtype": "$TEST_TRANSPORT", 00:27:54.377 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:54.377 "adrfam": "ipv4", 00:27:54.377 "trsvcid": "$NVMF_PORT", 00:27:54.377 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:54.377 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:54.377 "hdgst": ${hdgst:-false}, 00:27:54.377 "ddgst": ${ddgst:-false} 00:27:54.377 }, 00:27:54.377 "method": "bdev_nvme_attach_controller" 00:27:54.377 } 00:27:54.377 EOF 00:27:54.377 )") 00:27:54.378 14:06:48 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:54.378 14:06:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:54.378 14:06:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:54.378 14:06:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:54.378 14:06:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:54.378 14:06:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:54.378 14:06:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:54.378 14:06:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:54.378 14:06:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:27:54.378 14:06:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:54.378 14:06:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:54.378 14:06:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:54.378 14:06:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:54.378 14:06:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:54.378 14:06:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:54.378 14:06:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:27:54.378 14:06:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:54.378 14:06:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:27:54.378 14:06:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:27:54.378 14:06:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:54.378 "params": { 00:27:54.378 "name": "Nvme0", 00:27:54.378 "trtype": "tcp", 00:27:54.378 "traddr": "10.0.0.2", 00:27:54.378 "adrfam": "ipv4", 00:27:54.378 "trsvcid": "4420", 00:27:54.378 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:54.378 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:54.378 "hdgst": false, 00:27:54.378 "ddgst": false 00:27:54.378 }, 00:27:54.378 "method": "bdev_nvme_attach_controller" 00:27:54.378 }' 00:27:54.378 14:06:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:54.378 14:06:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:54.378 14:06:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:54.378 14:06:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:54.378 14:06:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:27:54.378 14:06:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:54.378 14:06:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:54.378 14:06:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:54.378 14:06:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:54.378 14:06:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:54.635 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:27:54.635 ... 
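The fio launch shown above is stock fio with SPDK's bdev plugin preloaded; the two /dev/fd arguments are the generated JSON bdev config (one bdev_nvme_attach_controller entry per subsystem, whose namespace shows up to fio as Nvme0n1, Nvme1n1, ...) and the fio job file. Run outside the harness, the same invocation is roughly as follows (paths and file names here are placeholders, not the exact descriptors the script generates on the fly):

    LD_PRELOAD="$SPDK_DIR"/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json job.fio
    # bdev.json: the printed {"params": {... "method": "bdev_nvme_attach_controller"}} config
    # job.fio:   one [filenameN] section per bdev, e.g. filename=Nvme0n1, with the bs/iodepth/numjobs
    #            values visible in the job header above (randread, 128k, iodepth=3, 3 threads here)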
00:27:54.635 fio-3.35 00:27:54.635 Starting 3 threads 00:27:54.635 EAL: No free 2048 kB hugepages reported on node 1 00:28:01.183 00:28:01.183 filename0: (groupid=0, jobs=1): err= 0: pid=3874999: Mon Jul 15 14:06:54 2024 00:28:01.183 read: IOPS=215, BW=26.9MiB/s (28.2MB/s)(136MiB/5045msec) 00:28:01.183 slat (nsec): min=4190, max=36859, avg=13830.80, stdev=3844.49 00:28:01.183 clat (usec): min=4554, max=95632, avg=13894.00, stdev=11444.82 00:28:01.184 lat (usec): min=4566, max=95645, avg=13907.83, stdev=11444.68 00:28:01.184 clat percentiles (usec): 00:28:01.184 | 1.00th=[ 5014], 5.00th=[ 5866], 10.00th=[ 7308], 20.00th=[ 8586], 00:28:01.184 | 30.00th=[ 9372], 40.00th=[10552], 50.00th=[11338], 60.00th=[12256], 00:28:01.184 | 70.00th=[13042], 80.00th=[14222], 90.00th=[16319], 95.00th=[48497], 00:28:01.184 | 99.00th=[53740], 99.50th=[89654], 99.90th=[95945], 99.95th=[95945], 00:28:01.184 | 99.99th=[95945] 00:28:01.184 bw ( KiB/s): min=23040, max=32768, per=33.63%, avg=27699.20, stdev=3546.83, samples=10 00:28:01.184 iops : min= 180, max= 256, avg=216.40, stdev=27.71, samples=10 00:28:01.184 lat (msec) : 10=35.48%, 20=57.70%, 50=3.32%, 100=3.50% 00:28:01.184 cpu : usr=89.18%, sys=10.39%, ctx=12, majf=0, minf=104 00:28:01.184 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:01.184 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.184 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.184 issued rwts: total=1085,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:01.184 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:01.184 filename0: (groupid=0, jobs=1): err= 0: pid=3875000: Mon Jul 15 14:06:54 2024 00:28:01.184 read: IOPS=225, BW=28.2MiB/s (29.5MB/s)(142MiB/5048msec) 00:28:01.184 slat (nsec): min=3620, max=37935, avg=13729.94, stdev=4030.82 00:28:01.184 clat (usec): min=4679, max=89100, avg=13260.26, stdev=9866.22 00:28:01.184 lat (usec): min=4691, max=89112, avg=13273.99, stdev=9866.20 00:28:01.184 clat percentiles (usec): 00:28:01.184 | 1.00th=[ 5211], 5.00th=[ 5800], 10.00th=[ 6915], 20.00th=[ 8225], 00:28:01.184 | 30.00th=[ 9241], 40.00th=[10421], 50.00th=[11338], 60.00th=[12125], 00:28:01.184 | 70.00th=[12911], 80.00th=[13960], 90.00th=[15533], 95.00th=[46924], 00:28:01.184 | 99.00th=[51643], 99.50th=[52167], 99.90th=[53216], 99.95th=[88605], 00:28:01.184 | 99.99th=[88605] 00:28:01.184 bw ( KiB/s): min=23296, max=35072, per=35.28%, avg=29056.00, stdev=3905.34, samples=10 00:28:01.184 iops : min= 182, max= 274, avg=227.00, stdev=30.51, samples=10 00:28:01.184 lat (msec) : 10=35.88%, 20=57.87%, 50=3.61%, 100=2.64% 00:28:01.184 cpu : usr=89.12%, sys=10.42%, ctx=11, majf=0, minf=107 00:28:01.184 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:01.184 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.184 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.184 issued rwts: total=1137,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:01.184 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:01.184 filename0: (groupid=0, jobs=1): err= 0: pid=3875001: Mon Jul 15 14:06:54 2024 00:28:01.184 read: IOPS=203, BW=25.5MiB/s (26.7MB/s)(128MiB/5039msec) 00:28:01.184 slat (nsec): min=4161, max=45294, avg=13169.84, stdev=3759.36 00:28:01.184 clat (usec): min=4200, max=57633, avg=14716.47, stdev=11656.39 00:28:01.184 lat (usec): min=4211, max=57642, avg=14729.64, stdev=11656.35 00:28:01.184 clat percentiles (usec): 
00:28:01.184 | 1.00th=[ 5014], 5.00th=[ 5932], 10.00th=[ 7963], 20.00th=[ 8979], 00:28:01.184 | 30.00th=[10159], 40.00th=[10945], 50.00th=[11469], 60.00th=[12256], 00:28:01.184 | 70.00th=[13042], 80.00th=[14222], 90.00th=[17171], 95.00th=[50594], 00:28:01.184 | 99.00th=[53740], 99.50th=[55313], 99.90th=[56361], 99.95th=[57410], 00:28:01.184 | 99.99th=[57410] 00:28:01.184 bw ( KiB/s): min=18944, max=30976, per=31.80%, avg=26194.30, stdev=3338.48, samples=10 00:28:01.184 iops : min= 148, max= 242, avg=204.60, stdev=26.06, samples=10 00:28:01.184 lat (msec) : 10=28.85%, 20=61.99%, 50=3.70%, 100=5.46% 00:28:01.184 cpu : usr=89.58%, sys=10.00%, ctx=8, majf=0, minf=137 00:28:01.184 IO depths : 1=1.4%, 2=98.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:01.184 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.184 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.184 issued rwts: total=1026,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:01.184 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:01.184 00:28:01.184 Run status group 0 (all jobs): 00:28:01.184 READ: bw=80.4MiB/s (84.3MB/s), 25.5MiB/s-28.2MiB/s (26.7MB/s-29.5MB/s), io=406MiB (426MB), run=5039-5048msec 00:28:01.184 14:06:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:28:01.184 14:06:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:01.184 14:06:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:01.184 14:06:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:01.184 14:06:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:01.184 14:06:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:01.184 14:06:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.184 14:06:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:01.184 14:06:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.184 14:06:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:01.184 14:06:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.184 14:06:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:01.184 14:06:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.184 14:06:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:28:01.184 14:06:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:28:01.184 14:06:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:28:01.184 14:06:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:28:01.184 14:06:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:28:01.184 14:06:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:28:01.184 14:06:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:28:01.184 14:06:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:01.184 14:06:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:01.184 14:06:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:01.184 14:06:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
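The subsystem bring-up that follows for this DIF-type-2 case is the same four-RPC sequence every test in this log uses (rpc_cmd is the autotest helper that forwards to SPDK's scripts/rpc.py over the target's RPC socket); only the --dif-type and the null-bdev geometry change between cases, and the transport itself was created once with --dif-insert-or-strip so the target inserts and strips the protection metadata on the wire. Copied from the trace:

    rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip     # done once, right after nvmf_tgt starts
    rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420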
00:28:01.184 14:06:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:28:01.184 14:06:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.184 14:06:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:01.184 bdev_null0 00:28:01.184 14:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.184 14:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:01.184 14:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.184 14:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:01.184 14:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.184 14:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:01.184 14:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.184 14:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:01.184 14:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.184 14:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:01.184 14:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.184 14:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:01.184 [2024-07-15 14:06:55.024521] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:01.184 14:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.184 14:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:01.184 14:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:28:01.184 14:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:28:01.184 14:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:28:01.184 14:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.184 14:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:01.184 bdev_null1 00:28:01.184 14:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.184 14:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:01.184 14:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.184 14:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:01.184 14:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.184 14:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:01.184 14:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.184 14:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:28:01.184 14:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.184 14:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:01.184 14:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.184 14:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:01.184 14:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.184 14:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:01.184 14:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:28:01.184 14:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:28:01.184 14:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:01.185 bdev_null2 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat 
<<-EOF 00:28:01.185 { 00:28:01.185 "params": { 00:28:01.185 "name": "Nvme$subsystem", 00:28:01.185 "trtype": "$TEST_TRANSPORT", 00:28:01.185 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:01.185 "adrfam": "ipv4", 00:28:01.185 "trsvcid": "$NVMF_PORT", 00:28:01.185 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:01.185 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:01.185 "hdgst": ${hdgst:-false}, 00:28:01.185 "ddgst": ${ddgst:-false} 00:28:01.185 }, 00:28:01.185 "method": "bdev_nvme_attach_controller" 00:28:01.185 } 00:28:01.185 EOF 00:28:01.185 )") 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:01.185 { 00:28:01.185 "params": { 00:28:01.185 "name": "Nvme$subsystem", 00:28:01.185 "trtype": "$TEST_TRANSPORT", 00:28:01.185 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:01.185 "adrfam": "ipv4", 00:28:01.185 "trsvcid": "$NVMF_PORT", 00:28:01.185 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:01.185 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:01.185 "hdgst": ${hdgst:-false}, 00:28:01.185 "ddgst": ${ddgst:-false} 00:28:01.185 }, 00:28:01.185 "method": "bdev_nvme_attach_controller" 00:28:01.185 } 00:28:01.185 EOF 00:28:01.185 )") 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file++ )) 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:01.185 { 00:28:01.185 "params": { 00:28:01.185 "name": "Nvme$subsystem", 00:28:01.185 "trtype": "$TEST_TRANSPORT", 00:28:01.185 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:01.185 "adrfam": "ipv4", 00:28:01.185 "trsvcid": "$NVMF_PORT", 00:28:01.185 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:01.185 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:01.185 "hdgst": ${hdgst:-false}, 00:28:01.185 "ddgst": ${ddgst:-false} 00:28:01.185 }, 00:28:01.185 "method": "bdev_nvme_attach_controller" 00:28:01.185 } 00:28:01.185 EOF 00:28:01.185 )") 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:01.185 "params": { 00:28:01.185 "name": "Nvme0", 00:28:01.185 "trtype": "tcp", 00:28:01.185 "traddr": "10.0.0.2", 00:28:01.185 "adrfam": "ipv4", 00:28:01.185 "trsvcid": "4420", 00:28:01.185 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:01.185 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:01.185 "hdgst": false, 00:28:01.185 "ddgst": false 00:28:01.185 }, 00:28:01.185 "method": "bdev_nvme_attach_controller" 00:28:01.185 },{ 00:28:01.185 "params": { 00:28:01.185 "name": "Nvme1", 00:28:01.185 "trtype": "tcp", 00:28:01.185 "traddr": "10.0.0.2", 00:28:01.185 "adrfam": "ipv4", 00:28:01.185 "trsvcid": "4420", 00:28:01.185 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:01.185 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:01.185 "hdgst": false, 00:28:01.185 "ddgst": false 00:28:01.185 }, 00:28:01.185 "method": "bdev_nvme_attach_controller" 00:28:01.185 },{ 00:28:01.185 "params": { 00:28:01.185 "name": "Nvme2", 00:28:01.185 "trtype": "tcp", 00:28:01.185 "traddr": "10.0.0.2", 00:28:01.185 "adrfam": "ipv4", 00:28:01.185 "trsvcid": "4420", 00:28:01.185 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:01.185 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:01.185 "hdgst": false, 00:28:01.185 "ddgst": false 00:28:01.185 }, 00:28:01.185 "method": "bdev_nvme_attach_controller" 00:28:01.185 }' 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:01.185 14:06:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:01.185 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:01.185 ... 00:28:01.185 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:01.185 ... 00:28:01.185 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:01.185 ... 00:28:01.185 fio-3.35 00:28:01.185 Starting 24 threads 00:28:01.185 EAL: No free 2048 kB hugepages reported on node 1 00:28:13.382 00:28:13.382 filename0: (groupid=0, jobs=1): err= 0: pid=3875749: Mon Jul 15 14:07:06 2024 00:28:13.382 read: IOPS=479, BW=1917KiB/s (1963kB/s)(18.8MiB/10017msec) 00:28:13.382 slat (nsec): min=8213, max=74563, avg=21218.59, stdev=11675.90 00:28:13.382 clat (usec): min=24119, max=56525, avg=33219.95, stdev=1782.82 00:28:13.382 lat (usec): min=24129, max=56556, avg=33241.17, stdev=1782.74 00:28:13.382 clat percentiles (usec): 00:28:13.382 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:28:13.382 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:28:13.382 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:28:13.382 | 99.00th=[40633], 99.50th=[42206], 99.90th=[56361], 99.95th=[56361], 00:28:13.382 | 99.99th=[56361] 00:28:13.382 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1913.60, stdev=65.33, samples=20 00:28:13.382 iops : min= 448, max= 512, avg=478.40, stdev=16.33, samples=20 00:28:13.382 lat (msec) : 50=99.67%, 100=0.33% 00:28:13.382 cpu : usr=98.11%, sys=1.48%, ctx=20, majf=0, minf=52 00:28:13.382 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:13.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.382 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.382 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:13.382 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:13.382 filename0: (groupid=0, jobs=1): err= 0: pid=3875750: Mon Jul 15 14:07:06 2024 00:28:13.382 read: IOPS=485, BW=1941KiB/s (1988kB/s)(19.0MiB/10017msec) 00:28:13.382 slat (usec): min=5, max=204, avg=38.16, stdev=22.94 00:28:13.382 clat (usec): min=7986, max=45057, avg=32653.63, stdev=2599.32 00:28:13.382 lat (usec): min=7998, max=45081, avg=32691.79, stdev=2598.37 00:28:13.382 clat percentiles (usec): 00:28:13.382 | 1.00th=[20841], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:28:13.382 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:28:13.382 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:28:13.382 | 99.00th=[39584], 99.50th=[40109], 99.90th=[42730], 99.95th=[43779], 00:28:13.382 | 99.99th=[44827] 00:28:13.382 bw ( KiB/s): min= 1792, max= 2284, per=4.21%, avg=1938.20, stdev=91.38, samples=20 00:28:13.382 iops : min= 448, max= 571, avg=484.55, stdev=22.84, samples=20 00:28:13.382 lat (msec) : 10=0.29%, 20=0.51%, 50=99.20% 00:28:13.382 
cpu : usr=96.84%, sys=2.14%, ctx=59, majf=0, minf=42 00:28:13.382 IO depths : 1=6.0%, 2=12.1%, 4=24.5%, 8=50.9%, 16=6.5%, 32=0.0%, >=64=0.0% 00:28:13.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.382 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.382 issued rwts: total=4861,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:13.382 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:13.382 filename0: (groupid=0, jobs=1): err= 0: pid=3875751: Mon Jul 15 14:07:06 2024 00:28:13.382 read: IOPS=479, BW=1917KiB/s (1963kB/s)(18.8MiB/10017msec) 00:28:13.382 slat (nsec): min=10126, max=76791, avg=33552.22, stdev=10015.86 00:28:13.382 clat (usec): min=23877, max=65332, avg=33098.21, stdev=1805.68 00:28:13.382 lat (usec): min=23910, max=65352, avg=33131.76, stdev=1804.94 00:28:13.382 clat percentiles (usec): 00:28:13.382 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:28:13.382 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:28:13.382 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:28:13.382 | 99.00th=[40109], 99.50th=[41681], 99.90th=[56361], 99.95th=[56361], 00:28:13.382 | 99.99th=[65274] 00:28:13.382 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1913.60, stdev=50.44, samples=20 00:28:13.382 iops : min= 448, max= 512, avg=478.40, stdev=12.61, samples=20 00:28:13.382 lat (msec) : 50=99.67%, 100=0.33% 00:28:13.382 cpu : usr=97.88%, sys=1.48%, ctx=32, majf=0, minf=35 00:28:13.382 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:13.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.382 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.382 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:13.382 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:13.382 filename0: (groupid=0, jobs=1): err= 0: pid=3875752: Mon Jul 15 14:07:06 2024 00:28:13.382 read: IOPS=482, BW=1930KiB/s (1976kB/s)(18.9MiB/10017msec) 00:28:13.382 slat (usec): min=10, max=103, avg=37.69, stdev=13.74 00:28:13.382 clat (usec): min=12164, max=43022, avg=32836.40, stdev=2005.58 00:28:13.382 lat (usec): min=12226, max=43054, avg=32874.09, stdev=2004.49 00:28:13.382 clat percentiles (usec): 00:28:13.382 | 1.00th=[27132], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:28:13.382 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:28:13.382 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:28:13.382 | 99.00th=[39584], 99.50th=[40109], 99.90th=[42730], 99.95th=[42730], 00:28:13.382 | 99.99th=[43254] 00:28:13.382 bw ( KiB/s): min= 1792, max= 2052, per=4.18%, avg=1926.60, stdev=50.95, samples=20 00:28:13.382 iops : min= 448, max= 513, avg=481.65, stdev=12.74, samples=20 00:28:13.382 lat (msec) : 20=0.66%, 50=99.34% 00:28:13.382 cpu : usr=97.51%, sys=1.85%, ctx=35, majf=0, minf=39 00:28:13.382 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:13.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.382 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.382 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:13.382 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:13.382 filename0: (groupid=0, jobs=1): err= 0: pid=3875753: Mon Jul 15 14:07:06 2024 00:28:13.382 read: IOPS=479, BW=1917KiB/s 
(1963kB/s)(18.8MiB/10018msec) 00:28:13.382 slat (usec): min=10, max=114, avg=48.22, stdev=19.62 00:28:13.382 clat (usec): min=23544, max=56729, avg=32960.14, stdev=1792.63 00:28:13.382 lat (usec): min=23584, max=56746, avg=33008.36, stdev=1788.67 00:28:13.382 clat percentiles (usec): 00:28:13.383 | 1.00th=[31327], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:28:13.383 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:28:13.383 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:28:13.383 | 99.00th=[40109], 99.50th=[41681], 99.90th=[56886], 99.95th=[56886], 00:28:13.383 | 99.99th=[56886] 00:28:13.383 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1913.60, stdev=50.44, samples=20 00:28:13.383 iops : min= 448, max= 512, avg=478.40, stdev=12.61, samples=20 00:28:13.383 lat (msec) : 50=99.67%, 100=0.33% 00:28:13.383 cpu : usr=97.81%, sys=1.55%, ctx=59, majf=0, minf=36 00:28:13.383 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:13.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.383 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.383 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:13.383 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:13.383 filename0: (groupid=0, jobs=1): err= 0: pid=3875754: Mon Jul 15 14:07:06 2024 00:28:13.383 read: IOPS=479, BW=1918KiB/s (1964kB/s)(18.8MiB/10009msec) 00:28:13.383 slat (usec): min=7, max=111, avg=39.12, stdev=15.47 00:28:13.383 clat (usec): min=12236, max=61180, avg=33012.88, stdev=2304.40 00:28:13.383 lat (usec): min=12258, max=61200, avg=33052.00, stdev=2302.18 00:28:13.383 clat percentiles (usec): 00:28:13.383 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:28:13.383 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:28:13.383 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:28:13.383 | 99.00th=[40109], 99.50th=[43779], 99.90th=[61080], 99.95th=[61080], 00:28:13.383 | 99.99th=[61080] 00:28:13.383 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1913.42, stdev=51.41, samples=19 00:28:13.383 iops : min= 448, max= 512, avg=478.32, stdev=12.95, samples=19 00:28:13.383 lat (msec) : 20=0.33%, 50=99.33%, 100=0.33% 00:28:13.383 cpu : usr=94.90%, sys=2.82%, ctx=389, majf=0, minf=24 00:28:13.383 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:28:13.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.383 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.383 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:13.383 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:13.383 filename0: (groupid=0, jobs=1): err= 0: pid=3875755: Mon Jul 15 14:07:06 2024 00:28:13.383 read: IOPS=479, BW=1918KiB/s (1964kB/s)(18.8MiB/10012msec) 00:28:13.383 slat (usec): min=8, max=117, avg=39.08, stdev=23.34 00:28:13.383 clat (usec): min=12392, max=62475, avg=33091.96, stdev=2345.95 00:28:13.383 lat (usec): min=12435, max=62541, avg=33131.04, stdev=2343.91 00:28:13.383 clat percentiles (usec): 00:28:13.383 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:28:13.383 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:28:13.383 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[34341], 00:28:13.383 | 99.00th=[40633], 99.50th=[43779], 99.90th=[62129], 
99.95th=[62653], 00:28:13.383 | 99.99th=[62653] 00:28:13.383 bw ( KiB/s): min= 1776, max= 2032, per=4.15%, avg=1913.26, stdev=50.12, samples=19 00:28:13.383 iops : min= 444, max= 508, avg=478.32, stdev=12.53, samples=19 00:28:13.383 lat (msec) : 20=0.33%, 50=99.33%, 100=0.33% 00:28:13.383 cpu : usr=97.61%, sys=1.67%, ctx=53, majf=0, minf=43 00:28:13.383 IO depths : 1=0.2%, 2=6.4%, 4=25.0%, 8=56.1%, 16=12.3%, 32=0.0%, >=64=0.0% 00:28:13.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.383 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.383 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:13.383 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:13.383 filename0: (groupid=0, jobs=1): err= 0: pid=3875756: Mon Jul 15 14:07:06 2024 00:28:13.383 read: IOPS=479, BW=1919KiB/s (1965kB/s)(18.8MiB/10006msec) 00:28:13.383 slat (nsec): min=10870, max=76125, avg=34123.78, stdev=10334.11 00:28:13.383 clat (usec): min=12292, max=67185, avg=33034.80, stdev=2547.06 00:28:13.383 lat (usec): min=12321, max=67223, avg=33068.92, stdev=2547.20 00:28:13.383 clat percentiles (usec): 00:28:13.383 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:28:13.383 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:28:13.383 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:28:13.383 | 99.00th=[40109], 99.50th=[41681], 99.90th=[66847], 99.95th=[67634], 00:28:13.383 | 99.99th=[67634] 00:28:13.383 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1913.26, stdev=51.80, samples=19 00:28:13.383 iops : min= 448, max= 512, avg=478.32, stdev=12.95, samples=19 00:28:13.383 lat (msec) : 20=0.33%, 50=99.33%, 100=0.33% 00:28:13.383 cpu : usr=96.51%, sys=2.32%, ctx=168, majf=0, minf=43 00:28:13.383 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:13.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.383 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.383 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:13.383 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:13.383 filename1: (groupid=0, jobs=1): err= 0: pid=3875757: Mon Jul 15 14:07:06 2024 00:28:13.383 read: IOPS=479, BW=1917KiB/s (1963kB/s)(18.8MiB/10017msec) 00:28:13.383 slat (usec): min=9, max=103, avg=39.59, stdev=15.01 00:28:13.383 clat (usec): min=23690, max=56653, avg=33040.71, stdev=1753.57 00:28:13.383 lat (usec): min=23732, max=56672, avg=33080.29, stdev=1753.01 00:28:13.383 clat percentiles (usec): 00:28:13.383 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:28:13.383 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:28:13.383 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:28:13.383 | 99.00th=[40109], 99.50th=[41681], 99.90th=[56361], 99.95th=[56886], 00:28:13.383 | 99.99th=[56886] 00:28:13.383 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1913.60, stdev=50.44, samples=20 00:28:13.383 iops : min= 448, max= 512, avg=478.40, stdev=12.61, samples=20 00:28:13.383 lat (msec) : 50=99.67%, 100=0.33% 00:28:13.383 cpu : usr=97.44%, sys=1.83%, ctx=102, majf=0, minf=32 00:28:13.383 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:13.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.383 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:28:13.383 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:13.383 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:13.383 filename1: (groupid=0, jobs=1): err= 0: pid=3875758: Mon Jul 15 14:07:06 2024 00:28:13.383 read: IOPS=478, BW=1913KiB/s (1959kB/s)(18.7MiB/10002msec) 00:28:13.383 slat (usec): min=8, max=119, avg=32.25, stdev=10.85 00:28:13.383 clat (usec): min=30775, max=65703, avg=33152.21, stdev=2128.27 00:28:13.383 lat (usec): min=30846, max=65736, avg=33184.46, stdev=2128.04 00:28:13.383 clat percentiles (usec): 00:28:13.383 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:28:13.383 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:28:13.383 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[34341], 00:28:13.383 | 99.00th=[40109], 99.50th=[42730], 99.90th=[65799], 99.95th=[65799], 00:28:13.383 | 99.99th=[65799] 00:28:13.383 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1913.26, stdev=51.80, samples=19 00:28:13.383 iops : min= 448, max= 512, avg=478.32, stdev=12.95, samples=19 00:28:13.383 lat (msec) : 50=99.67%, 100=0.33% 00:28:13.383 cpu : usr=97.82%, sys=1.54%, ctx=58, majf=0, minf=31 00:28:13.383 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:13.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.383 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.383 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:13.383 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:13.383 filename1: (groupid=0, jobs=1): err= 0: pid=3875759: Mon Jul 15 14:07:06 2024 00:28:13.383 read: IOPS=478, BW=1913KiB/s (1959kB/s)(18.7MiB/10002msec) 00:28:13.383 slat (usec): min=12, max=111, avg=38.02, stdev=15.80 00:28:13.383 clat (usec): min=22276, max=77639, avg=33116.62, stdev=2297.07 00:28:13.383 lat (usec): min=22312, max=77708, avg=33154.64, stdev=2296.06 00:28:13.383 clat percentiles (usec): 00:28:13.383 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:28:13.383 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:28:13.383 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:28:13.383 | 99.00th=[40109], 99.50th=[43779], 99.90th=[66323], 99.95th=[66847], 00:28:13.383 | 99.99th=[78119] 00:28:13.383 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1913.26, stdev=51.80, samples=19 00:28:13.383 iops : min= 448, max= 512, avg=478.32, stdev=12.95, samples=19 00:28:13.383 lat (msec) : 50=99.67%, 100=0.33% 00:28:13.383 cpu : usr=96.59%, sys=2.16%, ctx=121, majf=0, minf=48 00:28:13.383 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:28:13.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.383 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.383 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:13.383 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:13.383 filename1: (groupid=0, jobs=1): err= 0: pid=3875760: Mon Jul 15 14:07:06 2024 00:28:13.383 read: IOPS=482, BW=1930KiB/s (1976kB/s)(18.9MiB/10016msec) 00:28:13.383 slat (nsec): min=9579, max=75141, avg=32293.04, stdev=8624.27 00:28:13.383 clat (usec): min=11249, max=43078, avg=32868.71, stdev=2058.13 00:28:13.383 lat (usec): min=11303, max=43114, avg=32901.00, stdev=2057.63 00:28:13.383 clat percentiles (usec): 00:28:13.383 | 1.00th=[27395], 
5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:28:13.383 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:28:13.383 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:28:13.383 | 99.00th=[39584], 99.50th=[40109], 99.90th=[42730], 99.95th=[43254], 00:28:13.383 | 99.99th=[43254] 00:28:13.383 bw ( KiB/s): min= 1792, max= 2048, per=4.18%, avg=1926.40, stdev=50.44, samples=20 00:28:13.383 iops : min= 448, max= 512, avg=481.60, stdev=12.61, samples=20 00:28:13.383 lat (msec) : 20=0.66%, 50=99.34% 00:28:13.383 cpu : usr=97.04%, sys=2.06%, ctx=77, majf=0, minf=40 00:28:13.383 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:13.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.383 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.383 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:13.383 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:13.383 filename1: (groupid=0, jobs=1): err= 0: pid=3875761: Mon Jul 15 14:07:06 2024 00:28:13.383 read: IOPS=479, BW=1919KiB/s (1965kB/s)(18.8MiB/10007msec) 00:28:13.383 slat (usec): min=8, max=140, avg=53.78, stdev=25.99 00:28:13.383 clat (usec): min=11569, max=59230, avg=32869.81, stdev=2198.78 00:28:13.383 lat (usec): min=11587, max=59285, avg=32923.59, stdev=2197.99 00:28:13.383 clat percentiles (usec): 00:28:13.384 | 1.00th=[31589], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:28:13.384 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:28:13.384 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:28:13.384 | 99.00th=[39584], 99.50th=[43254], 99.90th=[58983], 99.95th=[58983], 00:28:13.384 | 99.99th=[58983] 00:28:13.384 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1913.26, stdev=51.80, samples=19 00:28:13.384 iops : min= 448, max= 512, avg=478.32, stdev=12.95, samples=19 00:28:13.384 lat (msec) : 20=0.33%, 50=99.33%, 100=0.33% 00:28:13.384 cpu : usr=95.50%, sys=2.66%, ctx=282, majf=0, minf=37 00:28:13.384 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:13.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.384 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.384 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:13.384 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:13.384 filename1: (groupid=0, jobs=1): err= 0: pid=3875762: Mon Jul 15 14:07:06 2024 00:28:13.384 read: IOPS=479, BW=1917KiB/s (1963kB/s)(18.8MiB/10017msec) 00:28:13.384 slat (usec): min=10, max=113, avg=36.80, stdev=15.28 00:28:13.384 clat (usec): min=23700, max=56214, avg=33068.74, stdev=1730.02 00:28:13.384 lat (usec): min=23749, max=56241, avg=33105.54, stdev=1729.87 00:28:13.384 clat percentiles (usec): 00:28:13.384 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:28:13.384 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:28:13.384 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:28:13.384 | 99.00th=[40109], 99.50th=[41681], 99.90th=[56361], 99.95th=[56361], 00:28:13.384 | 99.99th=[56361] 00:28:13.384 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1913.60, stdev=50.44, samples=20 00:28:13.384 iops : min= 448, max= 512, avg=478.40, stdev=12.61, samples=20 00:28:13.384 lat (msec) : 50=99.67%, 100=0.33% 00:28:13.384 cpu : usr=94.94%, sys=2.82%, ctx=192, majf=0, 
minf=38 00:28:13.384 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:13.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.384 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.384 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:13.384 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:13.384 filename1: (groupid=0, jobs=1): err= 0: pid=3875763: Mon Jul 15 14:07:06 2024 00:28:13.384 read: IOPS=480, BW=1921KiB/s (1967kB/s)(18.8MiB/10006msec) 00:28:13.384 slat (usec): min=9, max=215, avg=38.81, stdev=15.72 00:28:13.384 clat (usec): min=12236, max=67141, avg=32962.44, stdev=3067.74 00:28:13.384 lat (usec): min=12251, max=67168, avg=33001.25, stdev=3066.33 00:28:13.384 clat percentiles (usec): 00:28:13.384 | 1.00th=[22938], 5.00th=[31851], 10.00th=[32375], 20.00th=[32637], 00:28:13.384 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:28:13.384 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:28:13.384 | 99.00th=[42206], 99.50th=[49546], 99.90th=[66847], 99.95th=[66847], 00:28:13.384 | 99.99th=[67634] 00:28:13.384 bw ( KiB/s): min= 1792, max= 2048, per=4.16%, avg=1915.95, stdev=46.18, samples=19 00:28:13.384 iops : min= 448, max= 512, avg=478.95, stdev=11.61, samples=19 00:28:13.384 lat (msec) : 20=0.33%, 50=99.27%, 100=0.40% 00:28:13.384 cpu : usr=94.62%, sys=2.94%, ctx=220, majf=0, minf=32 00:28:13.384 IO depths : 1=6.0%, 2=12.0%, 4=24.4%, 8=51.1%, 16=6.6%, 32=0.0%, >=64=0.0% 00:28:13.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.384 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.384 issued rwts: total=4806,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:13.384 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:13.384 filename1: (groupid=0, jobs=1): err= 0: pid=3875764: Mon Jul 15 14:07:06 2024 00:28:13.384 read: IOPS=482, BW=1929KiB/s (1976kB/s)(18.9MiB/10018msec) 00:28:13.384 slat (usec): min=7, max=115, avg=34.65, stdev=18.47 00:28:13.384 clat (usec): min=7812, max=42986, avg=32883.27, stdev=1999.88 00:28:13.384 lat (usec): min=7850, max=43009, avg=32917.92, stdev=1997.76 00:28:13.384 clat percentiles (usec): 00:28:13.384 | 1.00th=[28181], 5.00th=[31851], 10.00th=[32375], 20.00th=[32637], 00:28:13.384 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:28:13.384 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:28:13.384 | 99.00th=[40109], 99.50th=[40109], 99.90th=[42730], 99.95th=[42730], 00:28:13.384 | 99.99th=[42730] 00:28:13.384 bw ( KiB/s): min= 1792, max= 2048, per=4.18%, avg=1926.40, stdev=50.44, samples=20 00:28:13.384 iops : min= 448, max= 512, avg=481.60, stdev=12.61, samples=20 00:28:13.384 lat (msec) : 10=0.14%, 20=0.56%, 50=99.30% 00:28:13.384 cpu : usr=97.88%, sys=1.60%, ctx=34, majf=0, minf=57 00:28:13.384 IO depths : 1=6.2%, 2=12.4%, 4=24.8%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:13.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.384 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.384 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:13.384 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:13.384 filename2: (groupid=0, jobs=1): err= 0: pid=3875765: Mon Jul 15 14:07:06 2024 00:28:13.384 read: IOPS=477, BW=1911KiB/s (1957kB/s)(18.7MiB/10011msec) 00:28:13.384 slat (usec): 
min=3, max=125, avg=28.33, stdev=18.51 00:28:13.384 clat (usec): min=22509, max=86670, avg=33245.80, stdev=2816.25 00:28:13.384 lat (usec): min=22523, max=86688, avg=33274.14, stdev=2814.45 00:28:13.384 clat percentiles (usec): 00:28:13.384 | 1.00th=[31589], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:28:13.384 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:28:13.384 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:28:13.384 | 99.00th=[40633], 99.50th=[43779], 99.90th=[76022], 99.95th=[76022], 00:28:13.384 | 99.99th=[86508] 00:28:13.384 bw ( KiB/s): min= 1664, max= 2048, per=4.14%, avg=1906.53, stdev=72.59, samples=19 00:28:13.384 iops : min= 416, max= 512, avg=476.63, stdev=18.15, samples=19 00:28:13.384 lat (msec) : 50=99.67%, 100=0.33% 00:28:13.384 cpu : usr=97.91%, sys=1.69%, ctx=9, majf=0, minf=52 00:28:13.384 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:28:13.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.384 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.384 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:13.384 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:13.384 filename2: (groupid=0, jobs=1): err= 0: pid=3875766: Mon Jul 15 14:07:06 2024 00:28:13.384 read: IOPS=485, BW=1942KiB/s (1988kB/s)(19.0MiB/10020msec) 00:28:13.384 slat (usec): min=5, max=121, avg=40.04, stdev=16.33 00:28:13.384 clat (usec): min=1764, max=43074, avg=32608.60, stdev=3214.96 00:28:13.384 lat (usec): min=1775, max=43097, avg=32648.64, stdev=3215.43 00:28:13.384 clat percentiles (usec): 00:28:13.384 | 1.00th=[11600], 5.00th=[31851], 10.00th=[32375], 20.00th=[32637], 00:28:13.384 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:28:13.384 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:28:13.384 | 99.00th=[39584], 99.50th=[40109], 99.90th=[42730], 99.95th=[43254], 00:28:13.384 | 99.99th=[43254] 00:28:13.384 bw ( KiB/s): min= 1792, max= 2304, per=4.21%, avg=1939.20, stdev=95.38, samples=20 00:28:13.384 iops : min= 448, max= 576, avg=484.80, stdev=23.85, samples=20 00:28:13.384 lat (msec) : 2=0.33%, 4=0.33%, 10=0.21%, 20=0.45%, 50=98.68% 00:28:13.384 cpu : usr=98.11%, sys=1.43%, ctx=45, majf=0, minf=36 00:28:13.384 IO depths : 1=6.2%, 2=12.4%, 4=24.8%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:13.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.384 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.384 issued rwts: total=4864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:13.384 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:13.384 filename2: (groupid=0, jobs=1): err= 0: pid=3875767: Mon Jul 15 14:07:06 2024 00:28:13.384 read: IOPS=479, BW=1917KiB/s (1963kB/s)(18.8MiB/10017msec) 00:28:13.384 slat (usec): min=8, max=147, avg=55.14, stdev=24.62 00:28:13.384 clat (usec): min=22491, max=65204, avg=32885.52, stdev=1831.57 00:28:13.384 lat (usec): min=22571, max=65222, avg=32940.66, stdev=1829.66 00:28:13.384 clat percentiles (usec): 00:28:13.384 | 1.00th=[31589], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:28:13.384 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:28:13.384 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:28:13.384 | 99.00th=[39584], 99.50th=[41681], 99.90th=[56361], 99.95th=[56361], 00:28:13.384 | 99.99th=[65274] 
00:28:13.384 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1913.60, stdev=50.44, samples=20 00:28:13.384 iops : min= 448, max= 512, avg=478.40, stdev=12.61, samples=20 00:28:13.384 lat (msec) : 50=99.67%, 100=0.33% 00:28:13.384 cpu : usr=95.18%, sys=2.90%, ctx=332, majf=0, minf=48 00:28:13.384 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:13.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.384 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.384 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:13.384 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:13.384 filename2: (groupid=0, jobs=1): err= 0: pid=3875768: Mon Jul 15 14:07:06 2024 00:28:13.384 read: IOPS=479, BW=1919KiB/s (1965kB/s)(18.8MiB/10007msec) 00:28:13.384 slat (nsec): min=8353, max=92703, avg=33798.96, stdev=12660.72 00:28:13.384 clat (usec): min=12191, max=58942, avg=33042.90, stdev=2143.86 00:28:13.384 lat (usec): min=12203, max=58995, avg=33076.70, stdev=2143.87 00:28:13.384 clat percentiles (usec): 00:28:13.384 | 1.00th=[31589], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:28:13.384 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:28:13.384 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:28:13.384 | 99.00th=[39060], 99.50th=[43779], 99.90th=[58983], 99.95th=[58983], 00:28:13.384 | 99.99th=[58983] 00:28:13.384 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1913.26, stdev=51.80, samples=19 00:28:13.384 iops : min= 448, max= 512, avg=478.32, stdev=12.95, samples=19 00:28:13.384 lat (msec) : 20=0.33%, 50=99.33%, 100=0.33% 00:28:13.384 cpu : usr=97.46%, sys=1.84%, ctx=29, majf=0, minf=35 00:28:13.384 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:13.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.384 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.384 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:13.384 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:13.384 filename2: (groupid=0, jobs=1): err= 0: pid=3875769: Mon Jul 15 14:07:06 2024 00:28:13.384 read: IOPS=479, BW=1917KiB/s (1963kB/s)(18.8MiB/10017msec) 00:28:13.385 slat (usec): min=8, max=113, avg=36.03, stdev=19.05 00:28:13.385 clat (usec): min=23891, max=56043, avg=33087.33, stdev=1743.25 00:28:13.385 lat (usec): min=23932, max=56066, avg=33123.36, stdev=1740.27 00:28:13.385 clat percentiles (usec): 00:28:13.385 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:28:13.385 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:28:13.385 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:28:13.385 | 99.00th=[40109], 99.50th=[42206], 99.90th=[55837], 99.95th=[55837], 00:28:13.385 | 99.99th=[55837] 00:28:13.385 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1913.60, stdev=50.44, samples=20 00:28:13.385 iops : min= 448, max= 512, avg=478.40, stdev=12.61, samples=20 00:28:13.385 lat (msec) : 50=99.67%, 100=0.33% 00:28:13.385 cpu : usr=97.04%, sys=2.08%, ctx=91, majf=0, minf=44 00:28:13.385 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:13.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.385 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.385 issued rwts: total=4800,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:28:13.385 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:13.385 filename2: (groupid=0, jobs=1): err= 0: pid=3875770: Mon Jul 15 14:07:06 2024 00:28:13.385 read: IOPS=479, BW=1918KiB/s (1965kB/s)(18.8MiB/10008msec) 00:28:13.385 slat (usec): min=11, max=113, avg=46.47, stdev=19.09 00:28:13.385 clat (usec): min=12272, max=69764, avg=32939.19, stdev=2690.87 00:28:13.385 lat (usec): min=12295, max=69807, avg=32985.66, stdev=2689.16 00:28:13.385 clat percentiles (usec): 00:28:13.385 | 1.00th=[31327], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:28:13.385 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:28:13.385 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:28:13.385 | 99.00th=[40109], 99.50th=[41681], 99.90th=[69731], 99.95th=[69731], 00:28:13.385 | 99.99th=[69731] 00:28:13.385 bw ( KiB/s): min= 1667, max= 2048, per=4.15%, avg=1913.42, stdev=66.49, samples=19 00:28:13.385 iops : min= 416, max= 512, avg=478.32, stdev=16.78, samples=19 00:28:13.385 lat (msec) : 20=0.33%, 50=99.33%, 100=0.33% 00:28:13.385 cpu : usr=98.06%, sys=1.48%, ctx=28, majf=0, minf=44 00:28:13.385 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:13.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.385 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.385 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:13.385 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:13.385 filename2: (groupid=0, jobs=1): err= 0: pid=3875771: Mon Jul 15 14:07:06 2024 00:28:13.385 read: IOPS=482, BW=1930KiB/s (1976kB/s)(18.9MiB/10016msec) 00:28:13.385 slat (usec): min=8, max=131, avg=32.98, stdev=15.56 00:28:13.385 clat (usec): min=10925, max=43060, avg=32863.08, stdev=2035.58 00:28:13.385 lat (usec): min=10938, max=43084, avg=32896.06, stdev=2034.01 00:28:13.385 clat percentiles (usec): 00:28:13.385 | 1.00th=[27395], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:28:13.385 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:28:13.385 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:28:13.385 | 99.00th=[39584], 99.50th=[40109], 99.90th=[42730], 99.95th=[42730], 00:28:13.385 | 99.99th=[43254] 00:28:13.385 bw ( KiB/s): min= 1792, max= 2048, per=4.18%, avg=1926.40, stdev=50.44, samples=20 00:28:13.385 iops : min= 448, max= 512, avg=481.60, stdev=12.61, samples=20 00:28:13.385 lat (msec) : 20=0.62%, 50=99.38% 00:28:13.385 cpu : usr=96.40%, sys=2.29%, ctx=180, majf=0, minf=47 00:28:13.385 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:13.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.385 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.385 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:13.385 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:13.385 filename2: (groupid=0, jobs=1): err= 0: pid=3875772: Mon Jul 15 14:07:06 2024 00:28:13.385 read: IOPS=480, BW=1922KiB/s (1968kB/s)(18.8MiB/10006msec) 00:28:13.385 slat (usec): min=8, max=108, avg=24.08, stdev=17.88 00:28:13.385 clat (usec): min=7449, max=95771, avg=33202.84, stdev=3496.51 00:28:13.385 lat (usec): min=7465, max=95813, avg=33226.92, stdev=3496.89 00:28:13.385 clat percentiles (usec): 00:28:13.385 | 1.00th=[21365], 5.00th=[32375], 10.00th=[32900], 20.00th=[32900], 
00:28:13.385 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:28:13.385 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:28:13.385 | 99.00th=[40633], 99.50th=[44303], 99.90th=[76022], 99.95th=[76022], 00:28:13.385 | 99.99th=[95945] 00:28:13.385 bw ( KiB/s): min= 1811, max= 1968, per=4.15%, avg=1914.26, stdev=39.19, samples=19 00:28:13.385 iops : min= 452, max= 492, avg=478.53, stdev= 9.91, samples=19 00:28:13.385 lat (msec) : 10=0.12%, 20=0.35%, 50=99.02%, 100=0.50% 00:28:13.385 cpu : usr=96.29%, sys=2.26%, ctx=125, majf=0, minf=57 00:28:13.385 IO depths : 1=0.1%, 2=0.1%, 4=0.6%, 8=80.7%, 16=18.5%, 32=0.0%, >=64=0.0% 00:28:13.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.385 complete : 0=0.0%, 4=89.5%, 8=10.3%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.385 issued rwts: total=4808,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:13.385 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:13.385 00:28:13.385 Run status group 0 (all jobs): 00:28:13.385 READ: bw=45.0MiB/s (47.2MB/s), 1911KiB/s-1942KiB/s (1957kB/s-1988kB/s), io=451MiB (473MB), run=10002-10020msec 00:28:13.385 14:07:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:28:13.385 14:07:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:13.385 14:07:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:13.385 14:07:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:13.385 14:07:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:13.385 14:07:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:13.385 14:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.385 14:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:13.385 14:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.385 14:07:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:13.385 14:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.385 14:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:13.385 14:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.385 14:07:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:13.385 14:07:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:13.385 14:07:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:13.385 14:07:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:13.385 14:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.385 14:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:13.385 14:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.385 14:07:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:13.385 14:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.385 14:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:13.385 14:07:06 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.385 14:07:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:13.385 14:07:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:28:13.385 14:07:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:28:13.385 14:07:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:13.385 14:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.385 14:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:13.385 14:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.385 14:07:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:28:13.385 14:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.385 14:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:13.385 14:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.385 14:07:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:28:13.385 14:07:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:28:13.385 14:07:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:28:13.385 14:07:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:28:13.385 14:07:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:28:13.385 14:07:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:28:13.385 14:07:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:28:13.385 14:07:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:13.385 14:07:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:13.385 14:07:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:13.385 14:07:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:13.385 14:07:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:13.385 14:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.385 14:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:13.385 bdev_null0 00:28:13.385 14:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.385 14:07:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:13.385 14:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.385 14:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:13.385 14:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.385 14:07:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:13.385 14:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.385 14:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:13.386 [2024-07-15 14:07:06.748847] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:13.386 bdev_null1 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:13.386 { 00:28:13.386 "params": { 00:28:13.386 "name": 
"Nvme$subsystem", 00:28:13.386 "trtype": "$TEST_TRANSPORT", 00:28:13.386 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:13.386 "adrfam": "ipv4", 00:28:13.386 "trsvcid": "$NVMF_PORT", 00:28:13.386 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:13.386 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:13.386 "hdgst": ${hdgst:-false}, 00:28:13.386 "ddgst": ${ddgst:-false} 00:28:13.386 }, 00:28:13.386 "method": "bdev_nvme_attach_controller" 00:28:13.386 } 00:28:13.386 EOF 00:28:13.386 )") 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:13.386 { 00:28:13.386 "params": { 00:28:13.386 "name": "Nvme$subsystem", 00:28:13.386 "trtype": "$TEST_TRANSPORT", 00:28:13.386 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:13.386 "adrfam": "ipv4", 00:28:13.386 "trsvcid": "$NVMF_PORT", 00:28:13.386 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:13.386 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:13.386 "hdgst": ${hdgst:-false}, 00:28:13.386 "ddgst": ${ddgst:-false} 00:28:13.386 }, 00:28:13.386 "method": "bdev_nvme_attach_controller" 00:28:13.386 } 00:28:13.386 EOF 00:28:13.386 )") 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 
00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:13.386 "params": { 00:28:13.386 "name": "Nvme0", 00:28:13.386 "trtype": "tcp", 00:28:13.386 "traddr": "10.0.0.2", 00:28:13.386 "adrfam": "ipv4", 00:28:13.386 "trsvcid": "4420", 00:28:13.386 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:13.386 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:13.386 "hdgst": false, 00:28:13.386 "ddgst": false 00:28:13.386 }, 00:28:13.386 "method": "bdev_nvme_attach_controller" 00:28:13.386 },{ 00:28:13.386 "params": { 00:28:13.386 "name": "Nvme1", 00:28:13.386 "trtype": "tcp", 00:28:13.386 "traddr": "10.0.0.2", 00:28:13.386 "adrfam": "ipv4", 00:28:13.386 "trsvcid": "4420", 00:28:13.386 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:13.386 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:13.386 "hdgst": false, 00:28:13.386 "ddgst": false 00:28:13.386 }, 00:28:13.386 "method": "bdev_nvme_attach_controller" 00:28:13.386 }' 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:13.386 14:07:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:13.386 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:13.386 ... 00:28:13.386 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:13.386 ... 
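The fio_bdev wrapper preloads the SPDK bdev ioengine and hands fio two process-substitution descriptors: the JSON bdev configuration just printed (on /dev/fd/62) and the job file produced by gen_fio_conf (on /dev/fd/61). Written out with ordinary files instead of descriptors, and with a hypothetical job file consistent with the filename0/filename1 banner above (the real one only ever exists in memory), the invocation amounts to roughly:

  cat > /tmp/dif.fio <<'EOF'
  ; hypothetical reconstruction -- gen_fio_conf generates the real job on the fly
  [global]
  thread=1
  rw=randread
  bs=8k,16k,128k
  iodepth=8
  numjobs=2
  runtime=5
  time_based=1

  [filename0]
  filename=Nvme0n1

  [filename1]
  filename=Nvme1n1
  EOF

  LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/nvme.json /tmp/dif.fio

Two filename groups at numjobs=2 account for the "Starting 4 threads" line that follows; bs=8k,16k,128k is fio's read,write,trim triplet, matching the (R)/(W)/(T) sizes in the banner.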
00:28:13.386 fio-3.35 00:28:13.386 Starting 4 threads 00:28:13.386 EAL: No free 2048 kB hugepages reported on node 1 00:28:18.783 00:28:18.783 filename0: (groupid=0, jobs=1): err= 0: pid=3877153: Mon Jul 15 14:07:12 2024 00:28:18.783 read: IOPS=1985, BW=15.5MiB/s (16.3MB/s)(77.6MiB/5002msec) 00:28:18.783 slat (nsec): min=4179, max=68915, avg=20777.72, stdev=9259.25 00:28:18.783 clat (usec): min=919, max=7448, avg=3958.46, stdev=312.34 00:28:18.783 lat (usec): min=932, max=7472, avg=3979.24, stdev=312.84 00:28:18.783 clat percentiles (usec): 00:28:18.783 | 1.00th=[ 3195], 5.00th=[ 3654], 10.00th=[ 3720], 20.00th=[ 3785], 00:28:18.783 | 30.00th=[ 3851], 40.00th=[ 3884], 50.00th=[ 3916], 60.00th=[ 3982], 00:28:18.783 | 70.00th=[ 4047], 80.00th=[ 4146], 90.00th=[ 4293], 95.00th=[ 4359], 00:28:18.783 | 99.00th=[ 4752], 99.50th=[ 5407], 99.90th=[ 6718], 99.95th=[ 7308], 00:28:18.783 | 99.99th=[ 7439] 00:28:18.783 bw ( KiB/s): min=15232, max=16272, per=24.99%, avg=15863.11, stdev=324.58, samples=9 00:28:18.783 iops : min= 1904, max= 2034, avg=1982.89, stdev=40.57, samples=9 00:28:18.783 lat (usec) : 1000=0.01% 00:28:18.783 lat (msec) : 2=0.16%, 4=64.10%, 10=35.73% 00:28:18.783 cpu : usr=94.48%, sys=4.60%, ctx=143, majf=0, minf=41 00:28:18.783 IO depths : 1=0.2%, 2=13.4%, 4=61.0%, 8=25.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:18.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.783 complete : 0=0.0%, 4=90.5%, 8=9.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.783 issued rwts: total=9931,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.783 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:18.783 filename0: (groupid=0, jobs=1): err= 0: pid=3877154: Mon Jul 15 14:07:12 2024 00:28:18.783 read: IOPS=1978, BW=15.5MiB/s (16.2MB/s)(77.4MiB/5005msec) 00:28:18.783 slat (nsec): min=3590, max=76058, avg=21859.30, stdev=9534.32 00:28:18.783 clat (usec): min=784, max=9582, avg=3972.19, stdev=328.65 00:28:18.783 lat (usec): min=814, max=9607, avg=3994.05, stdev=328.70 00:28:18.783 clat percentiles (usec): 00:28:18.783 | 1.00th=[ 3425], 5.00th=[ 3654], 10.00th=[ 3720], 20.00th=[ 3785], 00:28:18.783 | 30.00th=[ 3851], 40.00th=[ 3884], 50.00th=[ 3949], 60.00th=[ 3982], 00:28:18.783 | 70.00th=[ 4047], 80.00th=[ 4113], 90.00th=[ 4293], 95.00th=[ 4359], 00:28:18.783 | 99.00th=[ 4883], 99.50th=[ 5669], 99.90th=[ 6783], 99.95th=[ 9372], 00:28:18.783 | 99.99th=[ 9634] 00:28:18.783 bw ( KiB/s): min=15360, max=16256, per=24.94%, avg=15828.80, stdev=305.27, samples=10 00:28:18.783 iops : min= 1920, max= 2032, avg=1978.60, stdev=38.16, samples=10 00:28:18.783 lat (usec) : 1000=0.02% 00:28:18.783 lat (msec) : 2=0.05%, 4=63.26%, 10=36.67% 00:28:18.783 cpu : usr=94.32%, sys=5.14%, ctx=17, majf=0, minf=80 00:28:18.783 IO depths : 1=0.3%, 2=12.2%, 4=62.1%, 8=25.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:18.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.783 complete : 0=0.0%, 4=90.6%, 8=9.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.783 issued rwts: total=9901,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.783 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:18.783 filename1: (groupid=0, jobs=1): err= 0: pid=3877155: Mon Jul 15 14:07:12 2024 00:28:18.783 read: IOPS=1989, BW=15.5MiB/s (16.3MB/s)(77.7MiB/5001msec) 00:28:18.783 slat (nsec): min=4104, max=82402, avg=24466.24, stdev=10189.09 00:28:18.783 clat (usec): min=789, max=7291, avg=3927.88, stdev=297.84 00:28:18.783 lat (usec): min=802, max=7313, avg=3952.34, stdev=298.46 
00:28:18.783 clat percentiles (usec): 00:28:18.783 | 1.00th=[ 3163], 5.00th=[ 3621], 10.00th=[ 3687], 20.00th=[ 3752], 00:28:18.783 | 30.00th=[ 3818], 40.00th=[ 3851], 50.00th=[ 3916], 60.00th=[ 3949], 00:28:18.783 | 70.00th=[ 4015], 80.00th=[ 4113], 90.00th=[ 4228], 95.00th=[ 4359], 00:28:18.783 | 99.00th=[ 4621], 99.50th=[ 5145], 99.90th=[ 6325], 99.95th=[ 6587], 00:28:18.783 | 99.99th=[ 7308] 00:28:18.783 bw ( KiB/s): min=15232, max=16272, per=25.05%, avg=15896.89, stdev=335.44, samples=9 00:28:18.783 iops : min= 1904, max= 2034, avg=1987.11, stdev=41.93, samples=9 00:28:18.783 lat (usec) : 1000=0.04% 00:28:18.783 lat (msec) : 2=0.13%, 4=68.05%, 10=31.78% 00:28:18.783 cpu : usr=94.14%, sys=4.84%, ctx=38, majf=0, minf=46 00:28:18.783 IO depths : 1=0.6%, 2=22.4%, 4=52.1%, 8=24.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:18.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.783 complete : 0=0.0%, 4=90.4%, 8=9.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.783 issued rwts: total=9948,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.783 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:18.783 filename1: (groupid=0, jobs=1): err= 0: pid=3877156: Mon Jul 15 14:07:12 2024 00:28:18.783 read: IOPS=1984, BW=15.5MiB/s (16.3MB/s)(77.5MiB/5001msec) 00:28:18.783 slat (usec): min=3, max=144, avg=24.31, stdev=11.20 00:28:18.783 clat (usec): min=719, max=7395, avg=3933.43, stdev=338.32 00:28:18.783 lat (usec): min=738, max=7407, avg=3957.74, stdev=338.78 00:28:18.783 clat percentiles (usec): 00:28:18.783 | 1.00th=[ 3097], 5.00th=[ 3621], 10.00th=[ 3687], 20.00th=[ 3752], 00:28:18.783 | 30.00th=[ 3818], 40.00th=[ 3851], 50.00th=[ 3916], 60.00th=[ 3949], 00:28:18.783 | 70.00th=[ 4015], 80.00th=[ 4113], 90.00th=[ 4228], 95.00th=[ 4359], 00:28:18.783 | 99.00th=[ 4817], 99.50th=[ 5276], 99.90th=[ 6915], 99.95th=[ 7177], 00:28:18.783 | 99.99th=[ 7373] 00:28:18.783 bw ( KiB/s): min=15216, max=16272, per=25.01%, avg=15872.00, stdev=352.65, samples=10 00:28:18.783 iops : min= 1902, max= 2034, avg=1984.00, stdev=44.08, samples=10 00:28:18.783 lat (usec) : 750=0.02%, 1000=0.03% 00:28:18.783 lat (msec) : 2=0.33%, 4=67.11%, 10=32.51% 00:28:18.783 cpu : usr=82.96%, sys=9.06%, ctx=187, majf=0, minf=38 00:28:18.783 IO depths : 1=0.3%, 2=23.1%, 4=51.5%, 8=25.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:18.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.783 complete : 0=0.0%, 4=90.3%, 8=9.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.783 issued rwts: total=9926,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.783 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:18.783 00:28:18.783 Run status group 0 (all jobs): 00:28:18.783 READ: bw=62.0MiB/s (65.0MB/s), 15.5MiB/s-15.5MiB/s (16.2MB/s-16.3MB/s), io=310MiB (325MB), run=5001-5005msec 00:28:18.783 14:07:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:28:18.783 14:07:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:18.783 14:07:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:18.783 14:07:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:18.783 14:07:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:18.783 14:07:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:18.783 14:07:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.783 
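The per-thread and aggregate numbers in the run summary above are self-consistent: each of the four jobs reads 8 KiB blocks at roughly 1,980 IOPS, i.e. about 1985 x 8 KiB = 15.5 MiB/s per thread; four threads give the reported 62.0 MiB/s aggregate; and 62 MiB/s sustained over the ~5 s runtime yields the io=310 MiB total (equivalently, 9931+9901+9948+9926 = 39,706 completed 8 KiB reads).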
14:07:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:18.783 14:07:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.783 14:07:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:18.783 14:07:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.783 14:07:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:18.783 14:07:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.783 14:07:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:18.783 14:07:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:18.783 14:07:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:18.783 14:07:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:18.783 14:07:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.783 14:07:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:18.783 14:07:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.783 14:07:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:18.783 14:07:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.783 14:07:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:18.783 14:07:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.783 00:28:18.783 real 0m24.223s 00:28:18.783 user 4m28.975s 00:28:18.783 sys 0m8.561s 00:28:18.783 14:07:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:18.783 14:07:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:18.783 ************************************ 00:28:18.783 END TEST fio_dif_rand_params 00:28:18.783 ************************************ 00:28:18.783 14:07:13 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:28:18.783 14:07:13 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:28:18.783 14:07:13 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:18.784 14:07:13 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:18.784 14:07:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:18.784 ************************************ 00:28:18.784 START TEST fio_dif_digest 00:28:18.784 ************************************ 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:28:18.784 14:07:13 
nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:18.784 bdev_null0 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:18.784 [2024-07-15 14:07:13.249398] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:18.784 { 00:28:18.784 "params": { 00:28:18.784 "name": "Nvme$subsystem", 00:28:18.784 "trtype": "$TEST_TRANSPORT", 00:28:18.784 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.784 "adrfam": "ipv4", 00:28:18.784 "trsvcid": "$NVMF_PORT", 00:28:18.784 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.784 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.784 "hdgst": ${hdgst:-false}, 
00:28:18.784 "ddgst": ${ddgst:-false} 00:28:18.784 }, 00:28:18.784 "method": "bdev_nvme_attach_controller" 00:28:18.784 } 00:28:18.784 EOF 00:28:18.784 )") 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:18.784 "params": { 00:28:18.784 "name": "Nvme0", 00:28:18.784 "trtype": "tcp", 00:28:18.784 "traddr": "10.0.0.2", 00:28:18.784 "adrfam": "ipv4", 00:28:18.784 "trsvcid": "4420", 00:28:18.784 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:18.784 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:18.784 "hdgst": true, 00:28:18.784 "ddgst": true 00:28:18.784 }, 00:28:18.784 "method": "bdev_nvme_attach_controller" 00:28:18.784 }' 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:18.784 14:07:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:18.784 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:18.784 ... 
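For the digest pass the workload shape changes: one null bdev with DIF type 3, three jobs at queue depth 3, 128 KiB reads for 10 seconds, and the attach-controller parameters above enable both header and data digests on the TCP connection. A hypothetical job file consistent with the filename0 banner (again, the real one is generated in memory by gen_fio_conf and never written to disk) might look like:

  cat > /tmp/digest.fio <<'EOF'
  ; hypothetical reconstruction of the generated digest job
  [global]
  thread=1
  rw=randread
  bs=128k
  iodepth=3
  numjobs=3
  runtime=10
  time_based=1

  [filename0]
  filename=Nvme0n1
  EOF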
00:28:18.784 fio-3.35 00:28:18.784 Starting 3 threads 00:28:18.784 EAL: No free 2048 kB hugepages reported on node 1 00:28:30.975 00:28:30.975 filename0: (groupid=0, jobs=1): err= 0: pid=3877939: Mon Jul 15 14:07:24 2024 00:28:30.975 read: IOPS=205, BW=25.7MiB/s (26.9MB/s)(257MiB/10003msec) 00:28:30.975 slat (usec): min=6, max=133, avg=22.25, stdev= 7.14 00:28:30.975 clat (usec): min=9470, max=18031, avg=14590.29, stdev=1017.39 00:28:30.975 lat (usec): min=9492, max=18042, avg=14612.54, stdev=1017.40 00:28:30.975 clat percentiles (usec): 00:28:30.975 | 1.00th=[12256], 5.00th=[13042], 10.00th=[13304], 20.00th=[13829], 00:28:30.975 | 30.00th=[14091], 40.00th=[14353], 50.00th=[14484], 60.00th=[14746], 00:28:30.975 | 70.00th=[15008], 80.00th=[15401], 90.00th=[15926], 95.00th=[16319], 00:28:30.975 | 99.00th=[17171], 99.50th=[17433], 99.90th=[17957], 99.95th=[17957], 00:28:30.975 | 99.99th=[17957] 00:28:30.975 bw ( KiB/s): min=25344, max=26880, per=32.76%, avg=26260.21, stdev=393.98, samples=19 00:28:30.975 iops : min= 198, max= 210, avg=205.16, stdev= 3.08, samples=19 00:28:30.975 lat (msec) : 10=0.05%, 20=99.95% 00:28:30.975 cpu : usr=90.48%, sys=6.66%, ctx=136, majf=0, minf=133 00:28:30.975 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:30.975 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.975 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.975 issued rwts: total=2053,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:30.975 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:30.975 filename0: (groupid=0, jobs=1): err= 0: pid=3877940: Mon Jul 15 14:07:24 2024 00:28:30.975 read: IOPS=212, BW=26.6MiB/s (27.9MB/s)(267MiB/10047msec) 00:28:30.975 slat (nsec): min=4546, max=65707, avg=17545.02, stdev=5674.64 00:28:30.975 clat (usec): min=10509, max=51743, avg=14075.51, stdev=1515.30 00:28:30.975 lat (usec): min=10526, max=51762, avg=14093.05, stdev=1515.17 00:28:30.975 clat percentiles (usec): 00:28:30.975 | 1.00th=[11863], 5.00th=[12518], 10.00th=[12780], 20.00th=[13173], 00:28:30.975 | 30.00th=[13566], 40.00th=[13829], 50.00th=[13960], 60.00th=[14222], 00:28:30.975 | 70.00th=[14484], 80.00th=[14746], 90.00th=[15270], 95.00th=[15795], 00:28:30.975 | 99.00th=[16712], 99.50th=[17171], 99.90th=[22414], 99.95th=[50594], 00:28:30.975 | 99.99th=[51643] 00:28:30.975 bw ( KiB/s): min=26315, max=28160, per=34.06%, avg=27299.75, stdev=525.64, samples=20 00:28:30.975 iops : min= 205, max= 220, avg=213.25, stdev= 4.17, samples=20 00:28:30.975 lat (msec) : 20=99.86%, 50=0.05%, 100=0.09% 00:28:30.975 cpu : usr=92.48%, sys=7.05%, ctx=19, majf=0, minf=128 00:28:30.975 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:30.975 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.975 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.975 issued rwts: total=2135,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:30.975 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:30.975 filename0: (groupid=0, jobs=1): err= 0: pid=3877942: Mon Jul 15 14:07:24 2024 00:28:30.975 read: IOPS=209, BW=26.2MiB/s (27.4MB/s)(263MiB/10046msec) 00:28:30.975 slat (nsec): min=4515, max=54136, avg=17828.06, stdev=5858.25 00:28:30.975 clat (usec): min=10600, max=54048, avg=14289.23, stdev=1508.34 00:28:30.975 lat (usec): min=10619, max=54067, avg=14307.05, stdev=1508.32 00:28:30.975 clat percentiles (usec): 00:28:30.975 | 1.00th=[11863], 
5.00th=[12649], 10.00th=[12911], 20.00th=[13435], 00:28:30.975 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14222], 60.00th=[14484], 00:28:30.975 | 70.00th=[14746], 80.00th=[15008], 90.00th=[15533], 95.00th=[15926], 00:28:30.975 | 99.00th=[16909], 99.50th=[17171], 99.90th=[18220], 99.95th=[46924], 00:28:30.975 | 99.99th=[54264] 00:28:30.975 bw ( KiB/s): min=25856, max=28672, per=33.55%, avg=26892.80, stdev=645.90, samples=20 00:28:30.975 iops : min= 202, max= 224, avg=210.10, stdev= 5.05, samples=20 00:28:30.975 lat (msec) : 20=99.90%, 50=0.05%, 100=0.05% 00:28:30.975 cpu : usr=92.58%, sys=6.95%, ctx=24, majf=0, minf=133 00:28:30.975 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:30.975 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.975 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.975 issued rwts: total=2103,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:30.975 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:30.975 00:28:30.975 Run status group 0 (all jobs): 00:28:30.975 READ: bw=78.3MiB/s (82.1MB/s), 25.7MiB/s-26.6MiB/s (26.9MB/s-27.9MB/s), io=786MiB (825MB), run=10003-10047msec 00:28:30.975 14:07:24 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:28:30.975 14:07:24 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:28:30.975 14:07:24 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:28:30.975 14:07:24 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:30.975 14:07:24 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:28:30.975 14:07:24 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:30.975 14:07:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.975 14:07:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:30.975 14:07:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.975 14:07:24 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:30.975 14:07:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.975 14:07:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:30.975 14:07:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.975 00:28:30.975 real 0m11.157s 00:28:30.975 user 0m28.738s 00:28:30.975 sys 0m2.344s 00:28:30.975 14:07:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:30.975 14:07:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:30.975 ************************************ 00:28:30.975 END TEST fio_dif_digest 00:28:30.975 ************************************ 00:28:30.975 14:07:24 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:28:30.975 14:07:24 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:28:30.975 14:07:24 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:28:30.975 14:07:24 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:30.975 14:07:24 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:28:30.975 14:07:24 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:30.975 14:07:24 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:28:30.975 14:07:24 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:30.975 14:07:24 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 
00:28:30.975 rmmod nvme_tcp 00:28:30.975 rmmod nvme_fabrics 00:28:30.975 rmmod nvme_keyring 00:28:30.975 14:07:24 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:30.975 14:07:24 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:28:30.975 14:07:24 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:28:30.975 14:07:24 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 3871967 ']' 00:28:30.975 14:07:24 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 3871967 00:28:30.975 14:07:24 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 3871967 ']' 00:28:30.975 14:07:24 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 3871967 00:28:30.975 14:07:24 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:28:30.975 14:07:24 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:30.975 14:07:24 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3871967 00:28:30.975 14:07:24 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:30.975 14:07:24 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:30.975 14:07:24 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3871967' 00:28:30.975 killing process with pid 3871967 00:28:30.975 14:07:24 nvmf_dif -- common/autotest_common.sh@967 -- # kill 3871967 00:28:30.975 14:07:24 nvmf_dif -- common/autotest_common.sh@972 -- # wait 3871967 00:28:30.975 14:07:24 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:28:30.975 14:07:24 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:31.232 Waiting for block devices as requested 00:28:31.232 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:28:31.232 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:28:31.489 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:28:31.489 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:28:31.747 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:28:31.747 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:28:31.747 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:28:31.747 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:28:32.006 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:28:32.006 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:28:32.006 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:28:32.265 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:28:32.265 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:28:32.265 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:28:32.265 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:28:32.523 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:28:32.523 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:28:32.523 14:07:27 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:32.524 14:07:27 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:32.524 14:07:27 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:32.524 14:07:27 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:32.524 14:07:27 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:32.524 14:07:27 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:32.524 14:07:27 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:35.060 14:07:29 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:35.060 00:28:35.060 real 1m7.161s 00:28:35.060 user 6m24.692s 00:28:35.060 sys 0m21.323s 00:28:35.060 14:07:29 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 
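Teardown mirrors setup: the kernel initiator modules are unloaded, the nvmf_tgt process (pid 3871967 in this run) is killed, setup.sh reset hands the NVMe drive and ioatdma channels back to their kernel drivers, and the target-side network namespace is removed. Condensed into a sketch (the exact cleanup inside _remove_spdk_ns is not expanded in the log, so the netns command is an assumption):

  sync
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill 3871967 && wait 3871967           # killprocess $nvmfpid
  ./scripts/setup.sh reset               # vfio-pci -> nvme / ioatdma rebinds shown below
  ip netns delete cvl_0_0_ns_spdk        # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1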
00:28:35.060 14:07:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:35.060 ************************************ 00:28:35.060 END TEST nvmf_dif 00:28:35.060 ************************************ 00:28:35.060 14:07:29 -- common/autotest_common.sh@1142 -- # return 0 00:28:35.061 14:07:29 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:35.061 14:07:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:35.061 14:07:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:35.061 14:07:29 -- common/autotest_common.sh@10 -- # set +x 00:28:35.061 ************************************ 00:28:35.061 START TEST nvmf_abort_qd_sizes 00:28:35.061 ************************************ 00:28:35.061 14:07:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:35.061 * Looking for test storage... 00:28:35.061 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:35.061 14:07:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:35.061 14:07:29 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:28:35.061 14:07:29 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:35.061 14:07:29 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:35.061 14:07:29 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:35.061 14:07:29 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:35.061 14:07:29 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:35.061 14:07:29 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:35.061 14:07:29 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:35.061 14:07:29 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:35.061 14:07:29 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:35.061 14:07:29 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:35.061 14:07:29 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:35.061 14:07:29 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:28:35.061 14:07:29 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:35.061 14:07:29 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:35.061 14:07:29 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:35.061 14:07:29 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:35.061 14:07:29 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:35.061 14:07:29 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:35.061 14:07:29 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:35.061 14:07:29 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:35.061 14:07:29 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:35.061 14:07:29 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:35.061 14:07:29 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:35.061 14:07:29 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:28:35.061 14:07:29 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:35.061 14:07:29 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:28:35.061 14:07:29 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:35.061 14:07:29 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:35.061 14:07:29 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:35.061 14:07:29 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:35.061 14:07:29 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:35.061 14:07:29 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:35.061 14:07:29 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:35.061 14:07:29 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:35.061 14:07:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:28:35.061 14:07:29 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:35.061 14:07:29 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:35.061 14:07:29 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:35.061 14:07:29 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:35.061 14:07:29 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:35.061 14:07:29 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:35.061 14:07:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:35.061 14:07:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:35.061 14:07:29 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:35.061 14:07:29 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:35.061 14:07:29 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:28:35.061 14:07:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:36.963 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:36.963 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:28:36.963 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:36.963 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:36.963 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:36.963 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:36.963 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:36.963 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:28:36.963 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:36.963 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:28:36.963 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:28:36.963 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:28:36.963 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:28:36.963 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:28:36.963 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:28:36.963 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:36.963 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:36.963 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:36.963 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:36.963 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:36.963 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:36.963 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:36.963 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:36.963 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:36.963 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:36.963 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:36.963 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:36.963 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:36.963 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:36.963 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:36.963 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:36.963 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:36.963 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:36.963 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:28:36.963 Found 0000:84:00.0 (0x8086 - 0x159b) 00:28:36.963 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:28:36.964 Found 0000:84:00.1 (0x8086 - 0x159b) 00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:28:36.964 Found net devices under 0000:84:00.0: cvl_0_0 00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:28:36.964 Found net devices under 0000:84:00.1: cvl_0_1 00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
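The discovery logic above boils down to two steps: match the vendor/device IDs the harness supports (here Intel 0x8086:0x159b, an E810 pair), then look up the netdev name that sysfs exposes for each matching PCI function. A standalone sketch of the same idea, using the two ports found above:

  for pci in 0000:84:00.0 0000:84:00.1; do
      for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$netdev" ] && echo "$pci -> ${netdev##*/}"   # prints cvl_0_0 / cvl_0_1 on this host
      done
  done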
00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:36.964 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:36.964 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:28:36.964 00:28:36.964 --- 10.0.0.2 ping statistics --- 00:28:36.964 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:36.964 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:36.964 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:36.964 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:28:36.964 00:28:36.964 --- 10.0.0.1 ping statistics --- 00:28:36.964 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:36.964 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:28:36.964 14:07:31 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:38.338 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:28:38.338 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:28:38.338 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:28:38.338 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:28:38.338 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:28:38.338 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:28:38.338 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:28:38.338 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:28:38.338 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:28:38.338 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:28:38.338 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:28:38.338 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:28:38.338 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:28:38.338 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:28:38.338 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:28:38.338 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:28:38.917 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:28:39.208 14:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:39.208 14:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:39.208 14:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:39.208 14:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:39.208 14:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:39.208 14:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:39.208 14:07:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:28:39.208 14:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:39.208 14:07:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:39.208 14:07:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:39.208 14:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=3882853 00:28:39.208 14:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:28:39.208 14:07:33 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 3882853 00:28:39.208 14:07:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 3882853 ']' 00:28:39.208 14:07:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:39.208 14:07:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:39.208 14:07:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
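The namespace plumbing a few lines above turns the two E810 ports into a target/initiator pair on one host: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as 10.0.0.2 (target side), cvl_0_1 stays in the root namespace as 10.0.0.1 (initiator side), port 4420 is opened, and a ping in each direction confirms the link before nvmf_tgt is started inside the namespace. Pulled out of the xtrace, the sequence is:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1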
00:28:39.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:39.208 14:07:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:39.208 14:07:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:39.209 [2024-07-15 14:07:33.983361] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:28:39.209 [2024-07-15 14:07:33.983440] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:39.209 EAL: No free 2048 kB hugepages reported on node 1 00:28:39.492 [2024-07-15 14:07:34.051162] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:39.492 [2024-07-15 14:07:34.155253] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:39.492 [2024-07-15 14:07:34.155324] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:39.492 [2024-07-15 14:07:34.155347] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:39.492 [2024-07-15 14:07:34.155357] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:39.492 [2024-07-15 14:07:34.155367] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:39.492 [2024-07-15 14:07:34.155444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:39.492 [2024-07-15 14:07:34.155554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:39.492 [2024-07-15 14:07:34.155643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:39.492 [2024-07-15 14:07:34.155646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:39.492 14:07:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:39.492 14:07:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:28:39.492 14:07:34 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:39.492 14:07:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:39.492 14:07:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:39.492 14:07:34 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:39.492 14:07:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:28:39.492 14:07:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:28:39.492 14:07:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:28:39.492 14:07:34 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:28:39.492 14:07:34 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:28:39.492 14:07:34 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:82:00.0 ]] 00:28:39.492 14:07:34 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:28:39.492 14:07:34 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:28:39.492 14:07:34 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:82:00.0 ]] 00:28:39.492 14:07:34 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:28:39.492 14:07:34 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:28:39.492 14:07:34 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:28:39.492 14:07:34 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:28:39.492 14:07:34 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:82:00.0 00:28:39.492 14:07:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:28:39.492 14:07:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:82:00.0 00:28:39.492 14:07:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:28:39.492 14:07:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:39.492 14:07:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:39.492 14:07:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:39.749 ************************************ 00:28:39.749 START TEST spdk_target_abort 00:28:39.749 ************************************ 00:28:39.749 14:07:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:28:39.749 14:07:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:28:39.749 14:07:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:82:00.0 -b spdk_target 00:28:39.749 14:07:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.749 14:07:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:43.021 spdk_targetn1 00:28:43.021 14:07:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.021 14:07:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:43.021 14:07:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.021 14:07:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:43.021 [2024-07-15 14:07:37.206635] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:43.021 14:07:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.021 14:07:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:28:43.021 14:07:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.021 14:07:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:43.021 14:07:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.021 14:07:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:28:43.021 14:07:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.021 14:07:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:43.021 14:07:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.021 14:07:37 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:28:43.021 14:07:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.021 14:07:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:43.021 [2024-07-15 14:07:37.238898] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:43.021 14:07:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.021 14:07:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:28:43.021 14:07:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:43.021 14:07:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:43.021 14:07:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:28:43.021 14:07:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:43.021 14:07:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:43.021 14:07:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:43.021 14:07:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:43.021 14:07:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:43.021 14:07:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:43.021 14:07:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:43.021 14:07:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:43.021 14:07:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:43.021 14:07:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:43.021 14:07:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:28:43.021 14:07:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:43.021 14:07:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:43.021 14:07:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:43.021 14:07:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:43.021 14:07:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:43.021 14:07:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:43.021 EAL: No free 2048 kB hugepages 
reported on node 1 00:28:46.299 Initializing NVMe Controllers 00:28:46.299 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:46.299 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:46.299 Initialization complete. Launching workers. 00:28:46.299 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11370, failed: 0 00:28:46.299 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1239, failed to submit 10131 00:28:46.299 success 730, unsuccess 509, failed 0 00:28:46.299 14:07:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:46.299 14:07:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:46.299 EAL: No free 2048 kB hugepages reported on node 1 00:28:49.571 Initializing NVMe Controllers 00:28:49.571 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:49.571 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:49.571 Initialization complete. Launching workers. 00:28:49.571 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8483, failed: 0 00:28:49.571 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1290, failed to submit 7193 00:28:49.571 success 297, unsuccess 993, failed 0 00:28:49.571 14:07:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:49.571 14:07:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:49.571 EAL: No free 2048 kB hugepages reported on node 1 00:28:52.095 Initializing NVMe Controllers 00:28:52.095 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:52.095 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:52.095 Initialization complete. Launching workers. 
00:28:52.095 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31738, failed: 0 00:28:52.095 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2700, failed to submit 29038 00:28:52.095 success 535, unsuccess 2165, failed 0 00:28:52.095 14:07:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:28:52.095 14:07:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.095 14:07:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:52.095 14:07:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.095 14:07:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:28:52.095 14:07:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.095 14:07:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:53.465 14:07:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.465 14:07:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3882853 00:28:53.465 14:07:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 3882853 ']' 00:28:53.465 14:07:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 3882853 00:28:53.465 14:07:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:28:53.465 14:07:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:53.465 14:07:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3882853 00:28:53.465 14:07:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:53.465 14:07:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:53.465 14:07:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3882853' 00:28:53.465 killing process with pid 3882853 00:28:53.465 14:07:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 3882853 00:28:53.465 14:07:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 3882853 00:28:53.723 00:28:53.723 real 0m14.166s 00:28:53.723 user 0m53.552s 00:28:53.723 sys 0m2.825s 00:28:53.723 14:07:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:53.723 14:07:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:53.723 ************************************ 00:28:53.723 END TEST spdk_target_abort 00:28:53.723 ************************************ 00:28:53.723 14:07:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:28:53.723 14:07:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:28:53.723 14:07:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:53.723 14:07:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:53.723 14:07:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:53.981 
************************************ 00:28:53.981 START TEST kernel_target_abort 00:28:53.981 ************************************ 00:28:53.981 14:07:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:28:53.981 14:07:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:28:53.981 14:07:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:28:53.981 14:07:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:53.981 14:07:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:53.981 14:07:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:53.981 14:07:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:53.981 14:07:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:53.981 14:07:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:53.981 14:07:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:53.981 14:07:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:53.981 14:07:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:53.981 14:07:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:53.981 14:07:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:53.981 14:07:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:28:53.981 14:07:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:53.981 14:07:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:53.981 14:07:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:53.981 14:07:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:28:53.981 14:07:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:28:53.981 14:07:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:28:53.981 14:07:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:53.981 14:07:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:54.912 Waiting for block devices as requested 00:28:54.912 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:28:55.171 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:28:55.171 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:28:55.438 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:28:55.438 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:28:55.438 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:28:55.438 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:28:55.697 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:28:55.698 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:28:55.698 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:28:55.698 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:28:55.956 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:28:55.956 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:28:55.956 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:28:56.213 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:28:56.213 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:28:56.213 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:28:56.471 14:07:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:28:56.471 14:07:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:56.471 14:07:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:28:56.471 14:07:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:28:56.471 14:07:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:56.471 14:07:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:28:56.471 14:07:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:28:56.471 14:07:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:28:56.471 14:07:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:56.471 No valid GPT data, bailing 00:28:56.471 14:07:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:56.471 14:07:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:28:56.471 14:07:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:28:56.471 14:07:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:28:56.471 14:07:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:28:56.471 14:07:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:56.471 14:07:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:56.471 14:07:51 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:56.471 14:07:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:56.471 14:07:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:28:56.471 14:07:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:28:56.471 14:07:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:28:56.471 14:07:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:28:56.471 14:07:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:28:56.471 14:07:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:28:56.471 14:07:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:28:56.471 14:07:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:56.471 14:07:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:28:56.471 00:28:56.471 Discovery Log Number of Records 2, Generation counter 2 00:28:56.471 =====Discovery Log Entry 0====== 00:28:56.471 trtype: tcp 00:28:56.471 adrfam: ipv4 00:28:56.471 subtype: current discovery subsystem 00:28:56.471 treq: not specified, sq flow control disable supported 00:28:56.471 portid: 1 00:28:56.471 trsvcid: 4420 00:28:56.471 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:56.471 traddr: 10.0.0.1 00:28:56.471 eflags: none 00:28:56.471 sectype: none 00:28:56.471 =====Discovery Log Entry 1====== 00:28:56.471 trtype: tcp 00:28:56.471 adrfam: ipv4 00:28:56.471 subtype: nvme subsystem 00:28:56.471 treq: not specified, sq flow control disable supported 00:28:56.471 portid: 1 00:28:56.471 trsvcid: 4420 00:28:56.471 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:56.471 traddr: 10.0.0.1 00:28:56.471 eflags: none 00:28:56.471 sectype: none 00:28:56.471 14:07:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:28:56.471 14:07:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:56.471 14:07:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:56.471 14:07:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:28:56.471 14:07:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:56.471 14:07:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:56.471 14:07:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:56.471 14:07:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:56.471 14:07:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:56.471 14:07:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:56.471 14:07:51 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:56.471 14:07:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:56.471 14:07:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:56.471 14:07:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:56.471 14:07:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:28:56.471 14:07:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:56.471 14:07:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:28:56.471 14:07:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:56.471 14:07:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:56.471 14:07:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:56.471 14:07:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:56.471 EAL: No free 2048 kB hugepages reported on node 1 00:28:59.746 Initializing NVMe Controllers 00:28:59.746 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:59.746 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:59.746 Initialization complete. Launching workers. 00:28:59.746 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 50968, failed: 0 00:28:59.746 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 50968, failed to submit 0 00:28:59.746 success 0, unsuccess 50968, failed 0 00:28:59.746 14:07:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:59.746 14:07:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:59.746 EAL: No free 2048 kB hugepages reported on node 1 00:29:03.020 Initializing NVMe Controllers 00:29:03.020 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:03.020 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:03.020 Initialization complete. Launching workers. 
00:29:03.020 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 93210, failed: 0 00:29:03.020 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 23522, failed to submit 69688 00:29:03.020 success 0, unsuccess 23522, failed 0 00:29:03.020 14:07:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:03.020 14:07:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:03.020 EAL: No free 2048 kB hugepages reported on node 1 00:29:06.294 Initializing NVMe Controllers 00:29:06.294 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:06.294 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:06.294 Initialization complete. Launching workers. 00:29:06.294 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 91221, failed: 0 00:29:06.294 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 22810, failed to submit 68411 00:29:06.294 success 0, unsuccess 22810, failed 0 00:29:06.294 14:08:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:29:06.294 14:08:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:29:06.294 14:08:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:29:06.294 14:08:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:06.294 14:08:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:06.294 14:08:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:06.294 14:08:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:06.294 14:08:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:29:06.294 14:08:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:29:06.294 14:08:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:07.230 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:29:07.230 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:29:07.230 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:29:07.230 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:29:07.230 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:29:07.231 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:29:07.231 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:29:07.231 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:29:07.231 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:29:07.231 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:29:07.231 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:29:07.231 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:29:07.231 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:29:07.231 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:29:07.231 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:29:07.231 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:29:08.167 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:29:08.167 00:29:08.167 real 0m14.418s 00:29:08.167 user 0m6.165s 00:29:08.167 sys 0m3.367s 00:29:08.167 14:08:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:08.167 14:08:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:08.167 ************************************ 00:29:08.167 END TEST kernel_target_abort 00:29:08.167 ************************************ 00:29:08.425 14:08:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:29:08.425 14:08:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:29:08.425 14:08:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:29:08.425 14:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:08.425 14:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:29:08.425 14:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:08.425 14:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:29:08.425 14:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:08.425 14:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:08.425 rmmod nvme_tcp 00:29:08.425 rmmod nvme_fabrics 00:29:08.425 rmmod nvme_keyring 00:29:08.425 14:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:08.425 14:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:29:08.425 14:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:29:08.425 14:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 3882853 ']' 00:29:08.425 14:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 3882853 00:29:08.425 14:08:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 3882853 ']' 00:29:08.425 14:08:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 3882853 00:29:08.425 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3882853) - No such process 00:29:08.425 14:08:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 3882853 is not found' 00:29:08.425 Process with pid 3882853 is not found 00:29:08.425 14:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:29:08.425 14:08:03 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:09.799 Waiting for block devices as requested 00:29:09.799 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:29:09.799 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:29:09.799 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:29:09.799 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:29:10.057 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:29:10.057 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:29:10.057 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:29:10.057 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:29:10.057 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:29:10.316 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:29:10.316 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:29:10.316 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:29:10.316 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:29:10.602 0000:80:04.3 (8086 0e23): vfio-pci -> 
ioatdma 00:29:10.602 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:29:10.602 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:29:10.602 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:29:10.888 14:08:05 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:10.888 14:08:05 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:10.888 14:08:05 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:10.888 14:08:05 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:10.888 14:08:05 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:10.888 14:08:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:10.888 14:08:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:12.792 14:08:07 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:12.792 00:29:12.792 real 0m38.168s 00:29:12.792 user 1m1.905s 00:29:12.792 sys 0m9.702s 00:29:12.792 14:08:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:12.792 14:08:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:12.792 ************************************ 00:29:12.792 END TEST nvmf_abort_qd_sizes 00:29:12.792 ************************************ 00:29:12.792 14:08:07 -- common/autotest_common.sh@1142 -- # return 0 00:29:12.792 14:08:07 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:29:12.792 14:08:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:12.792 14:08:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:12.792 14:08:07 -- common/autotest_common.sh@10 -- # set +x 00:29:12.792 ************************************ 00:29:12.792 START TEST keyring_file 00:29:12.792 ************************************ 00:29:12.792 14:08:07 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:29:13.051 * Looking for test storage... 
00:29:13.051 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:29:13.051 14:08:07 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:29:13.051 14:08:07 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:13.051 14:08:07 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:29:13.051 14:08:07 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:13.051 14:08:07 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:13.051 14:08:07 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:13.051 14:08:07 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:13.051 14:08:07 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:13.051 14:08:07 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:13.051 14:08:07 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:13.051 14:08:07 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:13.051 14:08:07 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:13.051 14:08:07 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:13.051 14:08:07 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:13.051 14:08:07 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:29:13.051 14:08:07 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:13.051 14:08:07 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:13.051 14:08:07 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:13.051 14:08:07 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:13.051 14:08:07 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:13.051 14:08:07 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:13.051 14:08:07 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:13.051 14:08:07 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:13.051 14:08:07 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:13.051 14:08:07 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:13.052 14:08:07 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:13.052 14:08:07 keyring_file -- paths/export.sh@5 -- # export PATH 00:29:13.052 14:08:07 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:13.052 14:08:07 keyring_file -- nvmf/common.sh@47 -- # : 0 00:29:13.052 14:08:07 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:13.052 14:08:07 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:13.052 14:08:07 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:13.052 14:08:07 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:13.052 14:08:07 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:13.052 14:08:07 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:13.052 14:08:07 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:13.052 14:08:07 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:13.052 14:08:07 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:29:13.052 14:08:07 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:29:13.052 14:08:07 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:29:13.052 14:08:07 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:29:13.052 14:08:07 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:29:13.052 14:08:07 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:29:13.052 14:08:07 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:13.052 14:08:07 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:29:13.052 14:08:07 keyring_file -- keyring/common.sh@17 -- # name=key0 00:29:13.052 14:08:07 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:13.052 14:08:07 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:13.052 14:08:07 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:13.052 14:08:07 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ahxgZsz38z 00:29:13.052 14:08:07 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:13.052 14:08:07 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:13.052 14:08:07 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:29:13.052 14:08:07 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:13.052 14:08:07 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:29:13.052 14:08:07 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:29:13.052 14:08:07 keyring_file -- nvmf/common.sh@705 -- # python - 00:29:13.052 14:08:07 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ahxgZsz38z 00:29:13.052 14:08:07 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ahxgZsz38z 00:29:13.052 14:08:07 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.ahxgZsz38z 00:29:13.052 14:08:07 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:29:13.052 14:08:07 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:29:13.052 14:08:07 keyring_file -- keyring/common.sh@17 -- # name=key1 00:29:13.052 14:08:07 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:29:13.052 14:08:07 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:13.052 14:08:07 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:13.052 14:08:07 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.294NXW2zbS 00:29:13.052 14:08:07 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:29:13.052 14:08:07 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:29:13.052 14:08:07 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:29:13.052 14:08:07 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:13.052 14:08:07 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:29:13.052 14:08:07 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:29:13.052 14:08:07 keyring_file -- nvmf/common.sh@705 -- # python - 00:29:13.052 14:08:07 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.294NXW2zbS 00:29:13.052 14:08:07 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.294NXW2zbS 00:29:13.052 14:08:07 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.294NXW2zbS 00:29:13.052 14:08:07 keyring_file -- keyring/file.sh@30 -- # tgtpid=3888655 00:29:13.052 14:08:07 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:29:13.052 14:08:07 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3888655 00:29:13.052 14:08:07 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3888655 ']' 00:29:13.052 14:08:07 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:13.052 14:08:07 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:13.052 14:08:07 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:13.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:13.052 14:08:07 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:13.052 14:08:07 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:13.052 [2024-07-15 14:08:07.845589] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
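The prep_key calls traced just above create the two PSK files (/tmp/tmp.ahxgZsz38z and /tmp/tmp.294NXW2zbS) that the bdevperf instance started below will load with keyring_file_add_key. The inline 'python -' step is not expanded in the trace; a plausible sketch of what format_interchange_psk computes for key0 is shown here, assuming the TP 8018 PSK interchange format (the hex key and digest value 0 are taken from the trace; the CRC32-plus-base64 wrapping is an assumption):

python3 - <<'PY'
import base64, zlib
key = bytes.fromhex("00112233445566778899aabbccddeeff")   # key0 hex value from the trace
crc = zlib.crc32(key).to_bytes(4, byteorder="little")     # interchange format appends a little-endian CRC32 of the key
print("NVMeTLSkey-1:00:%s:" % base64.b64encode(key + crc).decode())  # digest field 00 = no hash (digest=0)
PY

The chmod 0600 on each file also matters: near the end of this section the test deliberately relaxes key0 to 0660, and keyring_file_add_key then rejects it with 'Invalid permissions for key file', which is the negative case being exercised.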
00:29:13.052 [2024-07-15 14:08:07.845672] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3888655 ] 00:29:13.052 EAL: No free 2048 kB hugepages reported on node 1 00:29:13.310 [2024-07-15 14:08:07.904268] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:13.310 [2024-07-15 14:08:08.015265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:13.568 14:08:08 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:13.568 14:08:08 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:29:13.568 14:08:08 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:29:13.568 14:08:08 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:13.568 14:08:08 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:13.568 [2024-07-15 14:08:08.245144] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:13.568 null0 00:29:13.568 [2024-07-15 14:08:08.277189] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:13.568 [2024-07-15 14:08:08.277581] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:13.568 [2024-07-15 14:08:08.285192] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:29:13.568 14:08:08 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:13.568 14:08:08 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:13.568 14:08:08 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:29:13.568 14:08:08 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:13.568 14:08:08 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:29:13.568 14:08:08 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:13.568 14:08:08 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:29:13.568 14:08:08 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:13.568 14:08:08 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:13.568 14:08:08 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:13.568 14:08:08 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:13.568 [2024-07-15 14:08:08.293232] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:29:13.568 request: 00:29:13.568 { 00:29:13.568 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:29:13.568 "secure_channel": false, 00:29:13.568 "listen_address": { 00:29:13.568 "trtype": "tcp", 00:29:13.568 "traddr": "127.0.0.1", 00:29:13.568 "trsvcid": "4420" 00:29:13.568 }, 00:29:13.568 "method": "nvmf_subsystem_add_listener", 00:29:13.568 "req_id": 1 00:29:13.568 } 00:29:13.568 Got JSON-RPC error response 00:29:13.568 response: 00:29:13.568 { 00:29:13.568 "code": -32602, 00:29:13.568 "message": "Invalid parameters" 00:29:13.568 } 00:29:13.568 14:08:08 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:29:13.568 14:08:08 keyring_file -- common/autotest_common.sh@651 -- # es=1 
00:29:13.568 14:08:08 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:13.568 14:08:08 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:13.568 14:08:08 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:13.568 14:08:08 keyring_file -- keyring/file.sh@46 -- # bperfpid=3888667 00:29:13.568 14:08:08 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:29:13.568 14:08:08 keyring_file -- keyring/file.sh@48 -- # waitforlisten 3888667 /var/tmp/bperf.sock 00:29:13.568 14:08:08 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3888667 ']' 00:29:13.568 14:08:08 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:13.568 14:08:08 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:13.568 14:08:08 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:13.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:13.568 14:08:08 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:13.569 14:08:08 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:13.569 [2024-07-15 14:08:08.338463] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 00:29:13.569 [2024-07-15 14:08:08.338529] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3888667 ] 00:29:13.569 EAL: No free 2048 kB hugepages reported on node 1 00:29:13.569 [2024-07-15 14:08:08.395132] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:13.826 [2024-07-15 14:08:08.501624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:13.826 14:08:08 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:13.826 14:08:08 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:29:13.826 14:08:08 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ahxgZsz38z 00:29:13.826 14:08:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ahxgZsz38z 00:29:14.084 14:08:08 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.294NXW2zbS 00:29:14.084 14:08:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.294NXW2zbS 00:29:14.341 14:08:09 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:29:14.341 14:08:09 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:29:14.341 14:08:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:14.341 14:08:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:14.341 14:08:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:14.598 14:08:09 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.ahxgZsz38z == \/\t\m\p\/\t\m\p\.\a\h\x\g\Z\s\z\3\8\z ]] 00:29:14.598 14:08:09 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:29:14.598 14:08:09 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:29:14.598 14:08:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:14.598 14:08:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:14.598 14:08:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:14.856 14:08:09 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.294NXW2zbS == \/\t\m\p\/\t\m\p\.\2\9\4\N\X\W\2\z\b\S ]] 00:29:14.856 14:08:09 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:29:14.856 14:08:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:14.856 14:08:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:14.856 14:08:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:14.856 14:08:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:14.856 14:08:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:15.113 14:08:09 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:29:15.113 14:08:09 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:29:15.113 14:08:09 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:15.113 14:08:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:15.113 14:08:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:15.113 14:08:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:15.113 14:08:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:15.371 14:08:10 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:29:15.371 14:08:10 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:15.371 14:08:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:15.629 [2024-07-15 14:08:10.323302] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:15.629 nvme0n1 00:29:15.629 14:08:10 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:29:15.629 14:08:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:15.629 14:08:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:15.629 14:08:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:15.629 14:08:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:15.629 14:08:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:15.886 14:08:10 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:29:15.886 14:08:10 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:29:15.886 14:08:10 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:15.886 14:08:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:15.886 14:08:10 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:15.886 14:08:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:15.886 14:08:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:16.143 14:08:10 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:29:16.143 14:08:10 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:16.399 Running I/O for 1 seconds... 00:29:17.332 00:29:17.332 Latency(us) 00:29:17.332 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:17.332 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:29:17.332 nvme0n1 : 1.01 9057.22 35.38 0.00 0.00 14073.89 4733.16 21748.24 00:29:17.332 =================================================================================================================== 00:29:17.332 Total : 9057.22 35.38 0.00 0.00 14073.89 4733.16 21748.24 00:29:17.332 0 00:29:17.332 14:08:12 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:17.332 14:08:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:17.589 14:08:12 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:29:17.589 14:08:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:17.589 14:08:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:17.589 14:08:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:17.589 14:08:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:17.589 14:08:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:17.847 14:08:12 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:29:17.847 14:08:12 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:29:17.847 14:08:12 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:17.847 14:08:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:17.847 14:08:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:17.847 14:08:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:17.847 14:08:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:18.104 14:08:12 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:29:18.104 14:08:12 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:18.104 14:08:12 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:29:18.104 14:08:12 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:18.104 14:08:12 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:29:18.104 14:08:12 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:18.104 14:08:12 keyring_file -- 
common/autotest_common.sh@640 -- # type -t bperf_cmd 00:29:18.104 14:08:12 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:18.104 14:08:12 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:18.104 14:08:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:18.361 [2024-07-15 14:08:13.015231] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:18.361 [2024-07-15 14:08:13.015965] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x190ebd0 (107): Transport endpoint is not connected 00:29:18.361 [2024-07-15 14:08:13.016959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x190ebd0 (9): Bad file descriptor 00:29:18.361 [2024-07-15 14:08:13.017958] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:18.361 [2024-07-15 14:08:13.017976] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:29:18.361 [2024-07-15 14:08:13.017989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:18.361 request: 00:29:18.361 { 00:29:18.361 "name": "nvme0", 00:29:18.361 "trtype": "tcp", 00:29:18.361 "traddr": "127.0.0.1", 00:29:18.361 "adrfam": "ipv4", 00:29:18.361 "trsvcid": "4420", 00:29:18.361 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:18.361 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:18.361 "prchk_reftag": false, 00:29:18.361 "prchk_guard": false, 00:29:18.361 "hdgst": false, 00:29:18.361 "ddgst": false, 00:29:18.361 "psk": "key1", 00:29:18.361 "method": "bdev_nvme_attach_controller", 00:29:18.361 "req_id": 1 00:29:18.361 } 00:29:18.361 Got JSON-RPC error response 00:29:18.361 response: 00:29:18.361 { 00:29:18.361 "code": -5, 00:29:18.361 "message": "Input/output error" 00:29:18.361 } 00:29:18.361 14:08:13 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:29:18.361 14:08:13 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:18.361 14:08:13 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:18.361 14:08:13 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:18.361 14:08:13 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:29:18.361 14:08:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:18.361 14:08:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:18.361 14:08:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:18.361 14:08:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:18.361 14:08:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:18.620 14:08:13 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:29:18.620 14:08:13 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:29:18.620 14:08:13 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:18.620 14:08:13 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:18.620 14:08:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:18.620 14:08:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:18.621 14:08:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:18.880 14:08:13 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:29:18.880 14:08:13 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:29:18.880 14:08:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:19.137 14:08:13 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:29:19.137 14:08:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:29:19.392 14:08:14 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:29:19.392 14:08:14 keyring_file -- keyring/file.sh@77 -- # jq length 00:29:19.392 14:08:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:19.648 14:08:14 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:29:19.648 14:08:14 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.ahxgZsz38z 00:29:19.648 14:08:14 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.ahxgZsz38z 00:29:19.648 14:08:14 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:29:19.648 14:08:14 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.ahxgZsz38z 00:29:19.648 14:08:14 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:29:19.648 14:08:14 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:19.648 14:08:14 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:29:19.648 14:08:14 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:19.648 14:08:14 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ahxgZsz38z 00:29:19.648 14:08:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ahxgZsz38z 00:29:19.906 [2024-07-15 14:08:14.506004] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ahxgZsz38z': 0100660 00:29:19.906 [2024-07-15 14:08:14.506058] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:29:19.906 request: 00:29:19.906 { 00:29:19.906 "name": "key0", 00:29:19.906 "path": "/tmp/tmp.ahxgZsz38z", 00:29:19.906 "method": "keyring_file_add_key", 00:29:19.906 "req_id": 1 00:29:19.906 } 00:29:19.906 Got JSON-RPC error response 00:29:19.906 response: 00:29:19.906 { 00:29:19.906 "code": -1, 00:29:19.906 "message": "Operation not permitted" 00:29:19.906 } 00:29:19.906 14:08:14 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:29:19.906 14:08:14 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:19.906 14:08:14 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:19.906 14:08:14 keyring_file -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:19.906 14:08:14 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.ahxgZsz38z 00:29:19.906 14:08:14 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ahxgZsz38z 00:29:19.906 14:08:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ahxgZsz38z 00:29:20.162 14:08:14 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.ahxgZsz38z 00:29:20.162 14:08:14 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:29:20.162 14:08:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:20.162 14:08:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:20.162 14:08:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:20.162 14:08:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:20.162 14:08:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:20.419 14:08:15 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:29:20.419 14:08:15 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:20.419 14:08:15 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:29:20.419 14:08:15 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:20.419 14:08:15 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:29:20.419 14:08:15 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:20.419 14:08:15 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:29:20.419 14:08:15 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:20.419 14:08:15 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:20.419 14:08:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:20.419 [2024-07-15 14:08:15.252028] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.ahxgZsz38z': No such file or directory 00:29:20.419 [2024-07-15 14:08:15.252075] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:29:20.419 [2024-07-15 14:08:15.252099] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:29:20.419 [2024-07-15 14:08:15.252125] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:20.419 [2024-07-15 14:08:15.252137] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:29:20.419 request: 00:29:20.419 { 00:29:20.419 "name": "nvme0", 00:29:20.419 "trtype": "tcp", 00:29:20.419 "traddr": "127.0.0.1", 00:29:20.419 "adrfam": "ipv4", 00:29:20.419 
"trsvcid": "4420", 00:29:20.419 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:20.419 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:20.419 "prchk_reftag": false, 00:29:20.419 "prchk_guard": false, 00:29:20.419 "hdgst": false, 00:29:20.419 "ddgst": false, 00:29:20.419 "psk": "key0", 00:29:20.419 "method": "bdev_nvme_attach_controller", 00:29:20.419 "req_id": 1 00:29:20.419 } 00:29:20.419 Got JSON-RPC error response 00:29:20.419 response: 00:29:20.419 { 00:29:20.419 "code": -19, 00:29:20.419 "message": "No such device" 00:29:20.419 } 00:29:20.675 14:08:15 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:29:20.675 14:08:15 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:20.675 14:08:15 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:20.675 14:08:15 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:20.675 14:08:15 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:29:20.675 14:08:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:20.931 14:08:15 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:20.931 14:08:15 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:29:20.931 14:08:15 keyring_file -- keyring/common.sh@17 -- # name=key0 00:29:20.931 14:08:15 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:20.931 14:08:15 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:20.931 14:08:15 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:20.931 14:08:15 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.FYL7Ai1yMf 00:29:20.931 14:08:15 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:20.931 14:08:15 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:20.931 14:08:15 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:29:20.931 14:08:15 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:20.931 14:08:15 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:29:20.931 14:08:15 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:29:20.931 14:08:15 keyring_file -- nvmf/common.sh@705 -- # python - 00:29:20.931 14:08:15 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.FYL7Ai1yMf 00:29:20.931 14:08:15 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.FYL7Ai1yMf 00:29:20.931 14:08:15 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.FYL7Ai1yMf 00:29:20.931 14:08:15 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FYL7Ai1yMf 00:29:20.931 14:08:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FYL7Ai1yMf 00:29:21.187 14:08:15 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:21.187 14:08:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:21.444 nvme0n1 00:29:21.444 
14:08:16 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:29:21.444 14:08:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:21.444 14:08:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:21.444 14:08:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:21.444 14:08:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:21.444 14:08:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:21.700 14:08:16 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:29:21.700 14:08:16 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:29:21.700 14:08:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:21.957 14:08:16 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:29:21.957 14:08:16 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:29:21.957 14:08:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:21.957 14:08:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:21.957 14:08:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:22.214 14:08:16 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:29:22.214 14:08:16 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:29:22.214 14:08:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:22.214 14:08:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:22.214 14:08:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:22.214 14:08:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:22.214 14:08:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:22.472 14:08:17 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:29:22.472 14:08:17 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:22.472 14:08:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:22.730 14:08:17 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:29:22.730 14:08:17 keyring_file -- keyring/file.sh@104 -- # jq length 00:29:22.730 14:08:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:22.988 14:08:17 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:29:22.988 14:08:17 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FYL7Ai1yMf 00:29:22.988 14:08:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FYL7Ai1yMf 00:29:23.245 14:08:17 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.294NXW2zbS 00:29:23.245 14:08:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.294NXW2zbS 00:29:23.502 14:08:18 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:23.502 14:08:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:23.760 nvme0n1 00:29:23.760 14:08:18 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:29:23.760 14:08:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:29:24.018 14:08:18 keyring_file -- keyring/file.sh@112 -- # config='{ 00:29:24.018 "subsystems": [ 00:29:24.018 { 00:29:24.018 "subsystem": "keyring", 00:29:24.018 "config": [ 00:29:24.018 { 00:29:24.018 "method": "keyring_file_add_key", 00:29:24.018 "params": { 00:29:24.018 "name": "key0", 00:29:24.018 "path": "/tmp/tmp.FYL7Ai1yMf" 00:29:24.018 } 00:29:24.018 }, 00:29:24.018 { 00:29:24.018 "method": "keyring_file_add_key", 00:29:24.018 "params": { 00:29:24.018 "name": "key1", 00:29:24.018 "path": "/tmp/tmp.294NXW2zbS" 00:29:24.018 } 00:29:24.018 } 00:29:24.018 ] 00:29:24.018 }, 00:29:24.018 { 00:29:24.018 "subsystem": "iobuf", 00:29:24.018 "config": [ 00:29:24.018 { 00:29:24.018 "method": "iobuf_set_options", 00:29:24.018 "params": { 00:29:24.018 "small_pool_count": 8192, 00:29:24.018 "large_pool_count": 1024, 00:29:24.018 "small_bufsize": 8192, 00:29:24.018 "large_bufsize": 135168 00:29:24.018 } 00:29:24.018 } 00:29:24.018 ] 00:29:24.018 }, 00:29:24.018 { 00:29:24.018 "subsystem": "sock", 00:29:24.018 "config": [ 00:29:24.018 { 00:29:24.018 "method": "sock_set_default_impl", 00:29:24.018 "params": { 00:29:24.018 "impl_name": "posix" 00:29:24.018 } 00:29:24.018 }, 00:29:24.018 { 00:29:24.018 "method": "sock_impl_set_options", 00:29:24.018 "params": { 00:29:24.018 "impl_name": "ssl", 00:29:24.018 "recv_buf_size": 4096, 00:29:24.018 "send_buf_size": 4096, 00:29:24.018 "enable_recv_pipe": true, 00:29:24.018 "enable_quickack": false, 00:29:24.018 "enable_placement_id": 0, 00:29:24.018 "enable_zerocopy_send_server": true, 00:29:24.018 "enable_zerocopy_send_client": false, 00:29:24.018 "zerocopy_threshold": 0, 00:29:24.018 "tls_version": 0, 00:29:24.018 "enable_ktls": false 00:29:24.018 } 00:29:24.018 }, 00:29:24.018 { 00:29:24.018 "method": "sock_impl_set_options", 00:29:24.018 "params": { 00:29:24.018 "impl_name": "posix", 00:29:24.018 "recv_buf_size": 2097152, 00:29:24.018 "send_buf_size": 2097152, 00:29:24.018 "enable_recv_pipe": true, 00:29:24.018 "enable_quickack": false, 00:29:24.018 "enable_placement_id": 0, 00:29:24.018 "enable_zerocopy_send_server": true, 00:29:24.018 "enable_zerocopy_send_client": false, 00:29:24.018 "zerocopy_threshold": 0, 00:29:24.018 "tls_version": 0, 00:29:24.018 "enable_ktls": false 00:29:24.018 } 00:29:24.018 } 00:29:24.018 ] 00:29:24.018 }, 00:29:24.018 { 00:29:24.018 "subsystem": "vmd", 00:29:24.018 "config": [] 00:29:24.018 }, 00:29:24.018 { 00:29:24.018 "subsystem": "accel", 00:29:24.018 "config": [ 00:29:24.018 { 00:29:24.018 "method": "accel_set_options", 00:29:24.018 "params": { 00:29:24.018 "small_cache_size": 128, 00:29:24.018 "large_cache_size": 16, 00:29:24.018 "task_count": 2048, 00:29:24.018 "sequence_count": 2048, 00:29:24.018 "buf_count": 2048 00:29:24.018 } 00:29:24.018 } 00:29:24.018 ] 00:29:24.018 
}, 00:29:24.018 { 00:29:24.018 "subsystem": "bdev", 00:29:24.018 "config": [ 00:29:24.018 { 00:29:24.018 "method": "bdev_set_options", 00:29:24.018 "params": { 00:29:24.018 "bdev_io_pool_size": 65535, 00:29:24.018 "bdev_io_cache_size": 256, 00:29:24.018 "bdev_auto_examine": true, 00:29:24.018 "iobuf_small_cache_size": 128, 00:29:24.018 "iobuf_large_cache_size": 16 00:29:24.018 } 00:29:24.018 }, 00:29:24.018 { 00:29:24.018 "method": "bdev_raid_set_options", 00:29:24.018 "params": { 00:29:24.018 "process_window_size_kb": 1024 00:29:24.018 } 00:29:24.018 }, 00:29:24.018 { 00:29:24.018 "method": "bdev_iscsi_set_options", 00:29:24.018 "params": { 00:29:24.018 "timeout_sec": 30 00:29:24.018 } 00:29:24.018 }, 00:29:24.018 { 00:29:24.018 "method": "bdev_nvme_set_options", 00:29:24.018 "params": { 00:29:24.018 "action_on_timeout": "none", 00:29:24.018 "timeout_us": 0, 00:29:24.018 "timeout_admin_us": 0, 00:29:24.018 "keep_alive_timeout_ms": 10000, 00:29:24.018 "arbitration_burst": 0, 00:29:24.018 "low_priority_weight": 0, 00:29:24.018 "medium_priority_weight": 0, 00:29:24.018 "high_priority_weight": 0, 00:29:24.018 "nvme_adminq_poll_period_us": 10000, 00:29:24.018 "nvme_ioq_poll_period_us": 0, 00:29:24.018 "io_queue_requests": 512, 00:29:24.018 "delay_cmd_submit": true, 00:29:24.018 "transport_retry_count": 4, 00:29:24.018 "bdev_retry_count": 3, 00:29:24.018 "transport_ack_timeout": 0, 00:29:24.018 "ctrlr_loss_timeout_sec": 0, 00:29:24.018 "reconnect_delay_sec": 0, 00:29:24.018 "fast_io_fail_timeout_sec": 0, 00:29:24.018 "disable_auto_failback": false, 00:29:24.018 "generate_uuids": false, 00:29:24.018 "transport_tos": 0, 00:29:24.018 "nvme_error_stat": false, 00:29:24.018 "rdma_srq_size": 0, 00:29:24.018 "io_path_stat": false, 00:29:24.018 "allow_accel_sequence": false, 00:29:24.018 "rdma_max_cq_size": 0, 00:29:24.018 "rdma_cm_event_timeout_ms": 0, 00:29:24.018 "dhchap_digests": [ 00:29:24.018 "sha256", 00:29:24.018 "sha384", 00:29:24.018 "sha512" 00:29:24.018 ], 00:29:24.018 "dhchap_dhgroups": [ 00:29:24.018 "null", 00:29:24.018 "ffdhe2048", 00:29:24.018 "ffdhe3072", 00:29:24.018 "ffdhe4096", 00:29:24.018 "ffdhe6144", 00:29:24.018 "ffdhe8192" 00:29:24.018 ] 00:29:24.018 } 00:29:24.018 }, 00:29:24.018 { 00:29:24.018 "method": "bdev_nvme_attach_controller", 00:29:24.018 "params": { 00:29:24.018 "name": "nvme0", 00:29:24.018 "trtype": "TCP", 00:29:24.018 "adrfam": "IPv4", 00:29:24.018 "traddr": "127.0.0.1", 00:29:24.018 "trsvcid": "4420", 00:29:24.018 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:24.018 "prchk_reftag": false, 00:29:24.018 "prchk_guard": false, 00:29:24.018 "ctrlr_loss_timeout_sec": 0, 00:29:24.018 "reconnect_delay_sec": 0, 00:29:24.018 "fast_io_fail_timeout_sec": 0, 00:29:24.018 "psk": "key0", 00:29:24.018 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:24.018 "hdgst": false, 00:29:24.018 "ddgst": false 00:29:24.018 } 00:29:24.018 }, 00:29:24.018 { 00:29:24.018 "method": "bdev_nvme_set_hotplug", 00:29:24.018 "params": { 00:29:24.018 "period_us": 100000, 00:29:24.018 "enable": false 00:29:24.018 } 00:29:24.018 }, 00:29:24.018 { 00:29:24.018 "method": "bdev_wait_for_examine" 00:29:24.018 } 00:29:24.018 ] 00:29:24.018 }, 00:29:24.018 { 00:29:24.018 "subsystem": "nbd", 00:29:24.018 "config": [] 00:29:24.018 } 00:29:24.018 ] 00:29:24.018 }' 00:29:24.018 14:08:18 keyring_file -- keyring/file.sh@114 -- # killprocess 3888667 00:29:24.018 14:08:18 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3888667 ']' 00:29:24.018 14:08:18 keyring_file -- common/autotest_common.sh@952 -- # kill 
-0 3888667 00:29:24.018 14:08:18 keyring_file -- common/autotest_common.sh@953 -- # uname 00:29:24.018 14:08:18 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:24.018 14:08:18 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3888667 00:29:24.018 14:08:18 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:24.018 14:08:18 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:24.018 14:08:18 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3888667' 00:29:24.018 killing process with pid 3888667 00:29:24.018 14:08:18 keyring_file -- common/autotest_common.sh@967 -- # kill 3888667 00:29:24.018 Received shutdown signal, test time was about 1.000000 seconds 00:29:24.018 00:29:24.018 Latency(us) 00:29:24.018 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:24.018 =================================================================================================================== 00:29:24.018 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:24.018 14:08:18 keyring_file -- common/autotest_common.sh@972 -- # wait 3888667 00:29:24.275 14:08:19 keyring_file -- keyring/file.sh@117 -- # bperfpid=3890124 00:29:24.275 14:08:19 keyring_file -- keyring/file.sh@119 -- # waitforlisten 3890124 /var/tmp/bperf.sock 00:29:24.275 14:08:19 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3890124 ']' 00:29:24.275 14:08:19 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:24.275 14:08:19 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:29:24.275 14:08:19 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:24.275 14:08:19 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:24.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
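The keyring_file flow traced up to this point boils down to a short RPC sequence against the bperf socket: write an interchange-format PSK into a file only the owner can read, register it with keyring_file_add_key, then reference it by name when attaching the NVMe/TCP controller. Roughly, with the rpc.py path abbreviated and using the NVMeTLSkey-1 string that format_interchange_psk produces for the 00112233445566778899aabbccddeeff test key (printed verbatim later in this log):

  # Key files must be 0600: the chmod 0660 attempt above is rejected with
  # "Invalid permissions for key file" (Operation not permitted), and attaching
  # after the backing file has been rm'd fails with "No such device".
  keyfile=$(mktemp)
  echo "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" > "$keyfile"
  chmod 0600 "$keyfile"

  rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$keyfile"
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
      -q nqn.2016-06.io.spdk:host0 --psk key0

  # A successful attach bumps the key's refcnt from 1 to 2, which is what the
  # get_refcnt checks in the trace verify:
  rpc.py -s /var/tmp/bperf.sock keyring_get_keys \
      | jq '.[] | select(.name == "key0")' | jq -r .refcnt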
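The restart happening here replays that state from configuration instead of individual RPCs: the JSON captured by save_config, including both keyring_file_add_key entries, is fed to a fresh bdevperf on an anonymous file descriptor, which is what the -c /dev/fd/63 argument reflects. A sketch of the launch, assuming process substitution is what supplies the descriptor (the exact redirection is not visible in this trace):

  # $config holds the output of: rpc.py -s /var/tmp/bperf.sock save_config
  ./build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
      -r /var/tmp/bperf.sock -z -c <(echo "$config") &
  bperfpid=$!
  # Once the new socket answers, keyring_get_keys reports both keys again
  # (jq length == 2) without any explicit re-add.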
00:29:24.275 14:08:19 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:29:24.275 "subsystems": [ 00:29:24.275 { 00:29:24.275 "subsystem": "keyring", 00:29:24.275 "config": [ 00:29:24.275 { 00:29:24.275 "method": "keyring_file_add_key", 00:29:24.275 "params": { 00:29:24.275 "name": "key0", 00:29:24.275 "path": "/tmp/tmp.FYL7Ai1yMf" 00:29:24.275 } 00:29:24.275 }, 00:29:24.275 { 00:29:24.275 "method": "keyring_file_add_key", 00:29:24.275 "params": { 00:29:24.275 "name": "key1", 00:29:24.275 "path": "/tmp/tmp.294NXW2zbS" 00:29:24.275 } 00:29:24.275 } 00:29:24.275 ] 00:29:24.275 }, 00:29:24.275 { 00:29:24.275 "subsystem": "iobuf", 00:29:24.275 "config": [ 00:29:24.275 { 00:29:24.275 "method": "iobuf_set_options", 00:29:24.275 "params": { 00:29:24.275 "small_pool_count": 8192, 00:29:24.275 "large_pool_count": 1024, 00:29:24.275 "small_bufsize": 8192, 00:29:24.275 "large_bufsize": 135168 00:29:24.275 } 00:29:24.275 } 00:29:24.275 ] 00:29:24.275 }, 00:29:24.275 { 00:29:24.275 "subsystem": "sock", 00:29:24.275 "config": [ 00:29:24.275 { 00:29:24.275 "method": "sock_set_default_impl", 00:29:24.275 "params": { 00:29:24.275 "impl_name": "posix" 00:29:24.275 } 00:29:24.275 }, 00:29:24.275 { 00:29:24.275 "method": "sock_impl_set_options", 00:29:24.275 "params": { 00:29:24.275 "impl_name": "ssl", 00:29:24.275 "recv_buf_size": 4096, 00:29:24.275 "send_buf_size": 4096, 00:29:24.275 "enable_recv_pipe": true, 00:29:24.275 "enable_quickack": false, 00:29:24.275 "enable_placement_id": 0, 00:29:24.275 "enable_zerocopy_send_server": true, 00:29:24.275 "enable_zerocopy_send_client": false, 00:29:24.275 "zerocopy_threshold": 0, 00:29:24.275 "tls_version": 0, 00:29:24.275 "enable_ktls": false 00:29:24.275 } 00:29:24.275 }, 00:29:24.275 { 00:29:24.275 "method": "sock_impl_set_options", 00:29:24.275 "params": { 00:29:24.275 "impl_name": "posix", 00:29:24.275 "recv_buf_size": 2097152, 00:29:24.275 "send_buf_size": 2097152, 00:29:24.275 "enable_recv_pipe": true, 00:29:24.275 "enable_quickack": false, 00:29:24.275 "enable_placement_id": 0, 00:29:24.276 "enable_zerocopy_send_server": true, 00:29:24.276 "enable_zerocopy_send_client": false, 00:29:24.276 "zerocopy_threshold": 0, 00:29:24.276 "tls_version": 0, 00:29:24.276 "enable_ktls": false 00:29:24.276 } 00:29:24.276 } 00:29:24.276 ] 00:29:24.276 }, 00:29:24.276 { 00:29:24.276 "subsystem": "vmd", 00:29:24.276 "config": [] 00:29:24.276 }, 00:29:24.276 { 00:29:24.276 "subsystem": "accel", 00:29:24.276 "config": [ 00:29:24.276 { 00:29:24.276 "method": "accel_set_options", 00:29:24.276 "params": { 00:29:24.276 "small_cache_size": 128, 00:29:24.276 "large_cache_size": 16, 00:29:24.276 "task_count": 2048, 00:29:24.276 "sequence_count": 2048, 00:29:24.276 "buf_count": 2048 00:29:24.276 } 00:29:24.276 } 00:29:24.276 ] 00:29:24.276 }, 00:29:24.276 { 00:29:24.276 "subsystem": "bdev", 00:29:24.276 "config": [ 00:29:24.276 { 00:29:24.276 "method": "bdev_set_options", 00:29:24.276 "params": { 00:29:24.276 "bdev_io_pool_size": 65535, 00:29:24.276 "bdev_io_cache_size": 256, 00:29:24.276 "bdev_auto_examine": true, 00:29:24.276 "iobuf_small_cache_size": 128, 00:29:24.276 "iobuf_large_cache_size": 16 00:29:24.276 } 00:29:24.276 }, 00:29:24.276 { 00:29:24.276 "method": "bdev_raid_set_options", 00:29:24.276 "params": { 00:29:24.276 "process_window_size_kb": 1024 00:29:24.276 } 00:29:24.276 }, 00:29:24.276 { 00:29:24.276 "method": "bdev_iscsi_set_options", 00:29:24.276 "params": { 00:29:24.276 "timeout_sec": 30 00:29:24.276 } 00:29:24.276 }, 00:29:24.276 { 00:29:24.276 "method": 
"bdev_nvme_set_options", 00:29:24.276 "params": { 00:29:24.276 "action_on_timeout": "none", 00:29:24.276 "timeout_us": 0, 00:29:24.276 "timeout_admin_us": 0, 00:29:24.276 "keep_alive_timeout_ms": 10000, 00:29:24.276 "arbitration_burst": 0, 00:29:24.276 "low_priority_weight": 0, 00:29:24.276 "medium_priority_weight": 0, 00:29:24.276 "high_priority_weight": 0, 00:29:24.276 "nvme_adminq_poll_period_us": 10000, 00:29:24.276 "nvme_ioq_poll_period_us": 0, 00:29:24.276 "io_queue_requests": 512, 00:29:24.276 "delay_cmd_submit": true, 00:29:24.276 "transport_retry_count": 4, 00:29:24.276 "bdev_retry_count": 3, 00:29:24.276 "transport_ack_timeout": 0, 00:29:24.276 "ctrlr_loss_timeout_sec": 0, 00:29:24.276 "reconnect_delay_sec": 0, 00:29:24.276 "fast_io_fail_timeout_sec": 0, 00:29:24.276 "disable_auto_failback": false, 00:29:24.276 "generate_uuids": false, 00:29:24.276 "transport_tos": 0, 00:29:24.276 "nvme_error_stat": false, 00:29:24.276 "rdma_srq_size": 0, 00:29:24.276 "io_path_stat": false, 00:29:24.276 "allow_accel_sequence": false, 00:29:24.276 "rdma_max_cq_size": 0, 00:29:24.276 "rdma_cm_event_timeout_ms": 0, 00:29:24.276 "dhchap_digests": [ 00:29:24.276 "sha256", 00:29:24.276 "sha384", 00:29:24.276 "sha512" 00:29:24.276 ], 00:29:24.276 "dhchap_dhgroups": [ 00:29:24.276 "null", 00:29:24.276 "ffdhe2048", 00:29:24.276 "ffdhe3072", 00:29:24.276 "ffdhe4096", 00:29:24.276 "ffdhe6144", 00:29:24.276 "ffdhe8192" 00:29:24.276 ] 00:29:24.276 } 00:29:24.276 }, 00:29:24.276 { 00:29:24.276 "method": "bdev_nvme_attach_controller", 00:29:24.276 "params": { 00:29:24.276 "name": "nvme0", 00:29:24.276 "trtype": "TCP", 00:29:24.276 "adrfam": "IPv4", 00:29:24.276 "traddr": "127.0.0.1", 00:29:24.276 "trsvcid": "4420", 00:29:24.276 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:24.276 "prchk_reftag": false, 00:29:24.276 "prchk_guard": false, 00:29:24.276 "ctrlr_loss_timeout_sec": 0, 00:29:24.276 "reconnect_delay_sec": 0, 00:29:24.276 "fast_io_fail_timeout_sec": 0, 00:29:24.276 "psk": "key0", 00:29:24.276 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:24.276 "hdgst": false, 00:29:24.276 "ddgst": false 00:29:24.276 } 00:29:24.276 }, 00:29:24.276 { 00:29:24.276 "method": "bdev_nvme_set_hotplug", 00:29:24.276 "params": { 00:29:24.276 "period_us": 100000, 00:29:24.276 "enable": false 00:29:24.276 } 00:29:24.276 }, 00:29:24.276 { 00:29:24.276 "method": "bdev_wait_for_examine" 00:29:24.276 } 00:29:24.276 ] 00:29:24.276 }, 00:29:24.276 { 00:29:24.276 "subsystem": "nbd", 00:29:24.276 "config": [] 00:29:24.276 } 00:29:24.276 ] 00:29:24.276 }' 00:29:24.276 14:08:19 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:24.276 14:08:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:24.276 [2024-07-15 14:08:19.077513] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
00:29:24.276 [2024-07-15 14:08:19.077591] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3890124 ] 00:29:24.276 EAL: No free 2048 kB hugepages reported on node 1 00:29:24.533 [2024-07-15 14:08:19.135626] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.533 [2024-07-15 14:08:19.239061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:24.790 [2024-07-15 14:08:19.427141] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:25.355 14:08:20 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:25.355 14:08:20 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:29:25.355 14:08:20 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:29:25.355 14:08:20 keyring_file -- keyring/file.sh@120 -- # jq length 00:29:25.355 14:08:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:25.613 14:08:20 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:29:25.613 14:08:20 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:29:25.613 14:08:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:25.613 14:08:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:25.613 14:08:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:25.613 14:08:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:25.613 14:08:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:25.870 14:08:20 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:29:25.870 14:08:20 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:29:25.870 14:08:20 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:25.870 14:08:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:25.870 14:08:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:25.870 14:08:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:25.870 14:08:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:26.127 14:08:20 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:29:26.127 14:08:20 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:29:26.127 14:08:20 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:29:26.127 14:08:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:29:26.384 14:08:21 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:29:26.384 14:08:21 keyring_file -- keyring/file.sh@1 -- # cleanup 00:29:26.384 14:08:21 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.FYL7Ai1yMf /tmp/tmp.294NXW2zbS 00:29:26.384 14:08:21 keyring_file -- keyring/file.sh@20 -- # killprocess 3890124 00:29:26.384 14:08:21 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3890124 ']' 00:29:26.384 14:08:21 keyring_file -- common/autotest_common.sh@952 -- # kill -0 3890124 00:29:26.384 14:08:21 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:29:26.384 14:08:21 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:26.384 14:08:21 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3890124 00:29:26.384 14:08:21 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:26.384 14:08:21 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:26.384 14:08:21 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3890124' 00:29:26.384 killing process with pid 3890124 00:29:26.384 14:08:21 keyring_file -- common/autotest_common.sh@967 -- # kill 3890124 00:29:26.384 Received shutdown signal, test time was about 1.000000 seconds 00:29:26.384 00:29:26.384 Latency(us) 00:29:26.384 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:26.384 =================================================================================================================== 00:29:26.384 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:26.384 14:08:21 keyring_file -- common/autotest_common.sh@972 -- # wait 3890124 00:29:26.641 14:08:21 keyring_file -- keyring/file.sh@21 -- # killprocess 3888655 00:29:26.641 14:08:21 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3888655 ']' 00:29:26.641 14:08:21 keyring_file -- common/autotest_common.sh@952 -- # kill -0 3888655 00:29:26.641 14:08:21 keyring_file -- common/autotest_common.sh@953 -- # uname 00:29:26.641 14:08:21 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:26.641 14:08:21 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3888655 00:29:26.641 14:08:21 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:26.641 14:08:21 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:26.641 14:08:21 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3888655' 00:29:26.641 killing process with pid 3888655 00:29:26.641 14:08:21 keyring_file -- common/autotest_common.sh@967 -- # kill 3888655 00:29:26.641 [2024-07-15 14:08:21.332711] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:29:26.641 14:08:21 keyring_file -- common/autotest_common.sh@972 -- # wait 3888655 00:29:27.206 00:29:27.206 real 0m14.153s 00:29:27.206 user 0m35.263s 00:29:27.206 sys 0m3.259s 00:29:27.206 14:08:21 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:27.206 14:08:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:27.206 ************************************ 00:29:27.206 END TEST keyring_file 00:29:27.206 ************************************ 00:29:27.206 14:08:21 -- common/autotest_common.sh@1142 -- # return 0 00:29:27.206 14:08:21 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:29:27.206 14:08:21 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:29:27.206 14:08:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:27.206 14:08:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:27.206 14:08:21 -- common/autotest_common.sh@10 -- # set +x 00:29:27.206 ************************************ 00:29:27.206 START TEST keyring_linux 00:29:27.206 ************************************ 00:29:27.206 14:08:21 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:29:27.206 * Looking for test storage... 00:29:27.206 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:29:27.206 14:08:21 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:29:27.206 14:08:21 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:27.206 14:08:21 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:29:27.206 14:08:21 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:27.206 14:08:21 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:27.206 14:08:21 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:27.206 14:08:21 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:27.206 14:08:21 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:27.206 14:08:21 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:27.206 14:08:21 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:27.206 14:08:21 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:27.206 14:08:21 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:27.206 14:08:21 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:27.206 14:08:21 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:27.206 14:08:21 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:29:27.206 14:08:21 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:27.206 14:08:21 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:27.206 14:08:21 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:27.206 14:08:21 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:27.206 14:08:21 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:27.206 14:08:21 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:27.206 14:08:21 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:27.206 14:08:21 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:27.206 14:08:21 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.206 14:08:21 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.206 14:08:21 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.206 14:08:21 keyring_linux -- paths/export.sh@5 -- # export PATH 00:29:27.206 14:08:21 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.206 14:08:21 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:29:27.206 14:08:21 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:27.206 14:08:21 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:27.206 14:08:21 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:27.206 14:08:21 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:27.206 14:08:21 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:27.206 14:08:21 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:27.206 14:08:21 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:27.206 14:08:21 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:27.206 14:08:21 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:29:27.206 14:08:21 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:29:27.206 14:08:21 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:29:27.206 14:08:21 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:29:27.206 14:08:21 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:29:27.206 14:08:21 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:29:27.206 14:08:21 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:29:27.206 14:08:21 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:29:27.206 14:08:21 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:29:27.206 14:08:21 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:27.206 14:08:21 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:29:27.206 14:08:21 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:29:27.206 14:08:21 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:27.206 14:08:21 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:27.206 14:08:21 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:29:27.206 14:08:21 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:27.206 14:08:21 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:29:27.206 14:08:21 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:29:27.206 14:08:21 keyring_linux -- nvmf/common.sh@705 -- # python - 00:29:27.206 14:08:21 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:29:27.207 14:08:21 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:29:27.207 /tmp/:spdk-test:key0 00:29:27.207 14:08:21 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:29:27.207 14:08:21 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:29:27.207 14:08:21 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:29:27.207 14:08:21 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:29:27.207 14:08:21 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:29:27.207 14:08:21 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:29:27.207 14:08:21 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:29:27.207 14:08:21 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:29:27.207 14:08:21 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:29:27.207 14:08:21 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:27.207 14:08:21 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:29:27.207 14:08:21 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:29:27.207 14:08:21 keyring_linux -- nvmf/common.sh@705 -- # python - 00:29:27.207 14:08:21 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:29:27.207 14:08:21 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:29:27.207 /tmp/:spdk-test:key1 00:29:27.207 14:08:21 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3890491 00:29:27.207 14:08:21 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:29:27.207 14:08:21 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3890491 00:29:27.207 14:08:21 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 3890491 ']' 00:29:27.207 14:08:21 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:27.207 14:08:21 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:27.207 14:08:21 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:27.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:27.207 14:08:21 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:27.207 14:08:21 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:27.207 [2024-07-15 14:08:22.009065] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
00:29:27.207 [2024-07-15 14:08:22.009161] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3890491 ] 00:29:27.207 EAL: No free 2048 kB hugepages reported on node 1 00:29:27.464 [2024-07-15 14:08:22.068065] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:27.464 [2024-07-15 14:08:22.172092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:27.722 14:08:22 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:27.722 14:08:22 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:29:27.722 14:08:22 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:29:27.722 14:08:22 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.722 14:08:22 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:27.722 [2024-07-15 14:08:22.405130] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:27.722 null0 00:29:27.722 [2024-07-15 14:08:22.437171] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:27.722 [2024-07-15 14:08:22.437612] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:27.722 14:08:22 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.722 14:08:22 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:29:27.722 456101213 00:29:27.722 14:08:22 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:29:27.722 815674294 00:29:27.722 14:08:22 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3890616 00:29:27.722 14:08:22 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:29:27.722 14:08:22 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3890616 /var/tmp/bperf.sock 00:29:27.722 14:08:22 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 3890616 ']' 00:29:27.722 14:08:22 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:27.722 14:08:22 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:27.722 14:08:22 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:27.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:27.722 14:08:22 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:27.722 14:08:22 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:27.722 [2024-07-15 14:08:22.500483] Starting SPDK v24.09-pre git sha1 b124a6951 / DPDK 24.03.0 initialization... 
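keyring_linux exercises the same attach path with the PSKs held in the kernel session keyring rather than in files: keyctl stores the interchange-format strings under :spdk-test:key0 and :spdk-test:key1, the Linux keyring plugin is enabled over RPC while bdevperf is still parked in --wait-for-rpc, and the attach then names the key directly. A minimal sketch of that path (serial numbers such as 456101213 naturally differ per run, and the rpc.py path is abbreviated):

  # Put the interchange-format PSK into the session keyring; keyctl prints the serial.
  keyctl add user :spdk-test:key0 \
      "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s

  rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
  rpc.py -s /var/tmp/bperf.sock framework_start_init
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
      -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0

  # The checks below confirm that the serial found by keyctl matches the .sn
  # field reported by keyring_get_keys, and that the stored payload is the
  # expected NVMeTLSkey-1 string.
  sn=$(keyctl search @s user :spdk-test:key0)
  keyctl print "$sn"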
00:29:27.722 [2024-07-15 14:08:22.500548] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3890616 ] 00:29:27.722 EAL: No free 2048 kB hugepages reported on node 1 00:29:27.722 [2024-07-15 14:08:22.556778] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:27.980 [2024-07-15 14:08:22.662717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:27.980 14:08:22 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:27.980 14:08:22 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:29:27.980 14:08:22 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:29:27.980 14:08:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:29:28.237 14:08:22 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:29:28.237 14:08:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:28.495 14:08:23 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:29:28.495 14:08:23 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:29:28.752 [2024-07-15 14:08:23.535864] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:29.008 nvme0n1 00:29:29.008 14:08:23 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:29:29.008 14:08:23 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:29:29.008 14:08:23 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:29:29.008 14:08:23 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:29:29.008 14:08:23 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:29:29.008 14:08:23 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:29.266 14:08:23 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:29:29.266 14:08:23 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:29:29.266 14:08:23 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:29:29.266 14:08:23 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:29:29.266 14:08:23 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:29.266 14:08:23 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:29.266 14:08:23 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:29:29.523 14:08:24 keyring_linux -- keyring/linux.sh@25 -- # sn=456101213 00:29:29.523 14:08:24 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:29:29.523 14:08:24 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:29:29.523 14:08:24 keyring_linux -- keyring/linux.sh@26 -- # [[ 456101213 == \4\5\6\1\0\1\2\1\3 ]] 00:29:29.523 14:08:24 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 456101213 00:29:29.523 14:08:24 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:29:29.523 14:08:24 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:29.523 Running I/O for 1 seconds... 00:29:30.508 00:29:30.508 Latency(us) 00:29:30.508 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:30.508 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:30.508 nvme0n1 : 1.01 9190.44 35.90 0.00 0.00 13832.63 6213.78 20680.25 00:29:30.508 =================================================================================================================== 00:29:30.508 Total : 9190.44 35.90 0.00 0.00 13832.63 6213.78 20680.25 00:29:30.508 0 00:29:30.508 14:08:25 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:30.508 14:08:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:30.793 14:08:25 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:29:30.793 14:08:25 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:29:30.793 14:08:25 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:29:30.793 14:08:25 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:29:30.793 14:08:25 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:29:30.793 14:08:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:31.050 14:08:25 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:29:31.050 14:08:25 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:29:31.050 14:08:25 keyring_linux -- keyring/linux.sh@23 -- # return 00:29:31.050 14:08:25 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:31.050 14:08:25 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:29:31.050 14:08:25 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:31.050 14:08:25 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:29:31.050 14:08:25 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:31.050 14:08:25 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:29:31.050 14:08:25 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:31.050 14:08:25 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:31.050 14:08:25 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:31.307 [2024-07-15 14:08:26.002109] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:31.307 [2024-07-15 14:08:26.003063] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6b780 (107): Transport endpoint is not connected 00:29:31.307 [2024-07-15 14:08:26.004055] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd6b780 (9): Bad file descriptor 00:29:31.307 [2024-07-15 14:08:26.005054] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:31.307 [2024-07-15 14:08:26.005071] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:29:31.307 [2024-07-15 14:08:26.005084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:31.307 request: 00:29:31.307 { 00:29:31.307 "name": "nvme0", 00:29:31.307 "trtype": "tcp", 00:29:31.307 "traddr": "127.0.0.1", 00:29:31.307 "adrfam": "ipv4", 00:29:31.307 "trsvcid": "4420", 00:29:31.307 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:31.307 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:31.307 "prchk_reftag": false, 00:29:31.307 "prchk_guard": false, 00:29:31.307 "hdgst": false, 00:29:31.307 "ddgst": false, 00:29:31.307 "psk": ":spdk-test:key1", 00:29:31.307 "method": "bdev_nvme_attach_controller", 00:29:31.307 "req_id": 1 00:29:31.307 } 00:29:31.307 Got JSON-RPC error response 00:29:31.307 response: 00:29:31.307 { 00:29:31.307 "code": -5, 00:29:31.307 "message": "Input/output error" 00:29:31.307 } 00:29:31.307 14:08:26 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:29:31.307 14:08:26 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:31.307 14:08:26 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:31.307 14:08:26 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:31.307 14:08:26 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:29:31.307 14:08:26 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:29:31.307 14:08:26 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:29:31.307 14:08:26 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:29:31.307 14:08:26 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:29:31.307 14:08:26 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:29:31.307 14:08:26 keyring_linux -- keyring/linux.sh@33 -- # sn=456101213 00:29:31.308 14:08:26 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 456101213 00:29:31.308 1 links removed 00:29:31.308 14:08:26 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:29:31.308 14:08:26 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:29:31.308 14:08:26 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:29:31.308 14:08:26 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:29:31.308 14:08:26 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:29:31.308 14:08:26 keyring_linux -- keyring/linux.sh@33 -- # sn=815674294 00:29:31.308 14:08:26 
keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 815674294 00:29:31.308 1 links removed 00:29:31.308 14:08:26 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3890616 00:29:31.308 14:08:26 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 3890616 ']' 00:29:31.308 14:08:26 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 3890616 00:29:31.308 14:08:26 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:29:31.308 14:08:26 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:31.308 14:08:26 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3890616 00:29:31.308 14:08:26 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:31.308 14:08:26 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:31.308 14:08:26 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3890616' 00:29:31.308 killing process with pid 3890616 00:29:31.308 14:08:26 keyring_linux -- common/autotest_common.sh@967 -- # kill 3890616 00:29:31.308 Received shutdown signal, test time was about 1.000000 seconds 00:29:31.308 00:29:31.308 Latency(us) 00:29:31.308 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:31.308 =================================================================================================================== 00:29:31.308 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:31.308 14:08:26 keyring_linux -- common/autotest_common.sh@972 -- # wait 3890616 00:29:31.565 14:08:26 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3890491 00:29:31.565 14:08:26 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 3890491 ']' 00:29:31.565 14:08:26 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 3890491 00:29:31.565 14:08:26 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:29:31.565 14:08:26 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:31.565 14:08:26 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3890491 00:29:31.565 14:08:26 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:31.565 14:08:26 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:31.565 14:08:26 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3890491' 00:29:31.565 killing process with pid 3890491 00:29:31.565 14:08:26 keyring_linux -- common/autotest_common.sh@967 -- # kill 3890491 00:29:31.565 14:08:26 keyring_linux -- common/autotest_common.sh@972 -- # wait 3890491 00:29:32.131 00:29:32.131 real 0m4.882s 00:29:32.131 user 0m9.429s 00:29:32.131 sys 0m1.568s 00:29:32.131 14:08:26 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:32.131 14:08:26 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:32.131 ************************************ 00:29:32.131 END TEST keyring_linux 00:29:32.131 ************************************ 00:29:32.131 14:08:26 -- common/autotest_common.sh@1142 -- # return 0 00:29:32.131 14:08:26 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:29:32.131 14:08:26 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:29:32.131 14:08:26 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:29:32.131 14:08:26 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:29:32.131 14:08:26 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:29:32.131 14:08:26 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:29:32.131 14:08:26 -- spdk/autotest.sh@339 
-- # '[' 0 -eq 1 ']' 00:29:32.131 14:08:26 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:29:32.131 14:08:26 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:29:32.131 14:08:26 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:29:32.131 14:08:26 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:29:32.131 14:08:26 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:29:32.131 14:08:26 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:29:32.131 14:08:26 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:29:32.131 14:08:26 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:29:32.131 14:08:26 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:29:32.131 14:08:26 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:29:32.131 14:08:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:32.131 14:08:26 -- common/autotest_common.sh@10 -- # set +x 00:29:32.131 14:08:26 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:29:32.131 14:08:26 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:29:32.131 14:08:26 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:29:32.131 14:08:26 -- common/autotest_common.sh@10 -- # set +x 00:29:34.033 INFO: APP EXITING 00:29:34.033 INFO: killing all VMs 00:29:34.033 INFO: killing vhost app 00:29:34.033 INFO: EXIT DONE 00:29:34.968 0000:82:00.0 (8086 0a54): Already using the nvme driver 00:29:34.968 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:29:34.968 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:29:34.968 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:29:34.968 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:29:34.968 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:29:34.968 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:29:34.968 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:29:34.968 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:29:34.968 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:29:35.226 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:29:35.226 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:29:35.226 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:29:35.226 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:29:35.226 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:29:35.226 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:29:35.226 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:29:36.600 Cleaning 00:29:36.600 Removing: /var/run/dpdk/spdk0/config 00:29:36.600 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:36.600 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:36.600 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:36.600 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:36.600 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:29:36.601 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:29:36.601 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:29:36.601 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:29:36.601 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:36.601 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:36.601 Removing: /var/run/dpdk/spdk1/config 00:29:36.601 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:29:36.601 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:29:36.601 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:29:36.601 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 
00:29:36.601 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:29:36.601 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:29:36.601 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:29:36.601 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:29:36.601 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:29:36.601 Removing: /var/run/dpdk/spdk1/hugepage_info 00:29:36.601 Removing: /var/run/dpdk/spdk1/mp_socket 00:29:36.601 Removing: /var/run/dpdk/spdk2/config 00:29:36.601 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:29:36.601 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:29:36.601 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:29:36.601 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:29:36.601 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:29:36.601 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:29:36.601 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:29:36.601 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:29:36.601 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:29:36.601 Removing: /var/run/dpdk/spdk2/hugepage_info 00:29:36.601 Removing: /var/run/dpdk/spdk3/config 00:29:36.601 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:29:36.601 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:29:36.601 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:29:36.601 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:29:36.601 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:29:36.601 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:29:36.601 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:29:36.601 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:29:36.601 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:29:36.601 Removing: /var/run/dpdk/spdk3/hugepage_info 00:29:36.601 Removing: /var/run/dpdk/spdk4/config 00:29:36.601 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:29:36.601 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:29:36.601 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:29:36.601 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:29:36.601 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:29:36.601 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:29:36.601 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:29:36.601 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:29:36.601 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:29:36.601 Removing: /var/run/dpdk/spdk4/hugepage_info 00:29:36.601 Removing: /dev/shm/bdev_svc_trace.1 00:29:36.601 Removing: /dev/shm/nvmf_trace.0 00:29:36.601 Removing: /dev/shm/spdk_tgt_trace.pid3630725 00:29:36.601 Removing: /var/run/dpdk/spdk0 00:29:36.601 Removing: /var/run/dpdk/spdk1 00:29:36.601 Removing: /var/run/dpdk/spdk2 00:29:36.601 Removing: /var/run/dpdk/spdk3 00:29:36.601 Removing: /var/run/dpdk/spdk4 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3629176 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3629908 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3630725 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3631162 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3631849 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3631989 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3632707 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3632712 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3632956 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3634267 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3635306 00:29:36.601 Removing: 
/var/run/dpdk/spdk_pid3635609 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3635801 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3636156 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3636576 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3636852 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3637019 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3637315 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3637507 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3639866 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3640029 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3640251 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3640320 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3640679 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3640754 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3641079 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3641191 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3641366 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3641486 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3641656 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3641669 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3642050 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3642308 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3642504 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3642671 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3642706 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3642882 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3643046 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3643201 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3643473 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3643631 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3643795 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3644063 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3644224 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3644385 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3644587 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3644813 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3644977 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3645131 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3645403 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3645561 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3645717 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3645991 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3646149 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3646316 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3646583 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3646745 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3646902 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3647113 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3649218 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3675848 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3678379 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3685359 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3688666 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3690911 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3691336 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3695419 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3699281 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3699283 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3699825 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3700480 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3701141 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3701541 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3701545 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3701690 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3701827 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3701829 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3702483 00:29:36.601 Removing: 
/var/run/dpdk/spdk_pid3703155 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3703804 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3704614 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3704816 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3704968 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3705856 00:29:36.601 Removing: /var/run/dpdk/spdk_pid3706577 00:29:36.602 Removing: /var/run/dpdk/spdk_pid3711960 00:29:36.602 Removing: /var/run/dpdk/spdk_pid3712229 00:29:36.602 Removing: /var/run/dpdk/spdk_pid3714775 00:29:36.602 Removing: /var/run/dpdk/spdk_pid3718464 00:29:36.602 Removing: /var/run/dpdk/spdk_pid3720611 00:29:36.602 Removing: /var/run/dpdk/spdk_pid3727038 00:29:36.602 Removing: /var/run/dpdk/spdk_pid3732172 00:29:36.602 Removing: /var/run/dpdk/spdk_pid3733483 00:29:36.602 Removing: /var/run/dpdk/spdk_pid3734150 00:29:36.602 Removing: /var/run/dpdk/spdk_pid3745165 00:29:36.602 Removing: /var/run/dpdk/spdk_pid3747396 00:29:36.602 Removing: /var/run/dpdk/spdk_pid3772029 00:29:36.602 Removing: /var/run/dpdk/spdk_pid3774825 00:29:36.602 Removing: /var/run/dpdk/spdk_pid3776006 00:29:36.602 Removing: /var/run/dpdk/spdk_pid3777323 00:29:36.602 Removing: /var/run/dpdk/spdk_pid3777378 00:29:36.602 Removing: /var/run/dpdk/spdk_pid3777482 00:29:36.602 Removing: /var/run/dpdk/spdk_pid3777621 00:29:36.602 Removing: /var/run/dpdk/spdk_pid3778052 00:29:36.602 Removing: /var/run/dpdk/spdk_pid3779370 00:29:36.602 Removing: /var/run/dpdk/spdk_pid3779977 00:29:36.602 Removing: /var/run/dpdk/spdk_pid3780404 00:29:36.602 Removing: /var/run/dpdk/spdk_pid3782016 00:29:36.602 Removing: /var/run/dpdk/spdk_pid3782443 00:29:36.602 Removing: /var/run/dpdk/spdk_pid3782903 00:29:36.602 Removing: /var/run/dpdk/spdk_pid3785429 00:29:36.602 Removing: /var/run/dpdk/spdk_pid3791494 00:29:36.602 Removing: /var/run/dpdk/spdk_pid3794393 00:29:36.602 Removing: /var/run/dpdk/spdk_pid3798682 00:29:36.602 Removing: /var/run/dpdk/spdk_pid3799744 00:29:36.860 Removing: /var/run/dpdk/spdk_pid3800716 00:29:36.860 Removing: /var/run/dpdk/spdk_pid3803397 00:29:36.860 Removing: /var/run/dpdk/spdk_pid3805653 00:29:36.860 Removing: /var/run/dpdk/spdk_pid3810018 00:29:36.860 Removing: /var/run/dpdk/spdk_pid3810027 00:29:36.860 Removing: /var/run/dpdk/spdk_pid3812815 00:29:36.860 Removing: /var/run/dpdk/spdk_pid3812948 00:29:36.860 Removing: /var/run/dpdk/spdk_pid3813201 00:29:36.860 Removing: /var/run/dpdk/spdk_pid3813469 00:29:36.860 Removing: /var/run/dpdk/spdk_pid3813477 00:29:36.860 Removing: /var/run/dpdk/spdk_pid3816243 00:29:36.860 Removing: /var/run/dpdk/spdk_pid3816586 00:29:36.860 Removing: /var/run/dpdk/spdk_pid3819258 00:29:36.860 Removing: /var/run/dpdk/spdk_pid3821117 00:29:36.860 Removing: /var/run/dpdk/spdk_pid3824552 00:29:36.860 Removing: /var/run/dpdk/spdk_pid3828001 00:29:36.860 Removing: /var/run/dpdk/spdk_pid3835120 00:29:36.860 Removing: /var/run/dpdk/spdk_pid3839489 00:29:36.860 Removing: /var/run/dpdk/spdk_pid3839491 00:29:36.860 Removing: /var/run/dpdk/spdk_pid3851896 00:29:36.860 Removing: /var/run/dpdk/spdk_pid3852308 00:29:36.860 Removing: /var/run/dpdk/spdk_pid3852827 00:29:36.860 Removing: /var/run/dpdk/spdk_pid3853241 00:29:36.860 Removing: /var/run/dpdk/spdk_pid3853823 00:29:36.860 Removing: /var/run/dpdk/spdk_pid3854230 00:29:36.860 Removing: /var/run/dpdk/spdk_pid3854663 00:29:36.860 Removing: /var/run/dpdk/spdk_pid3855168 00:29:36.860 Removing: /var/run/dpdk/spdk_pid3857690 00:29:36.860 Removing: /var/run/dpdk/spdk_pid3857832 00:29:36.860 Removing: /var/run/dpdk/spdk_pid3861760 00:29:36.860 Removing: 
/var/run/dpdk/spdk_pid3861819 00:29:36.860 Removing: /var/run/dpdk/spdk_pid3863543 00:29:36.860 Removing: /var/run/dpdk/spdk_pid3869097 00:29:36.860 Removing: /var/run/dpdk/spdk_pid3869105 00:29:36.860 Removing: /var/run/dpdk/spdk_pid3872019 00:29:36.860 Removing: /var/run/dpdk/spdk_pid3873416 00:29:36.860 Removing: /var/run/dpdk/spdk_pid3874825 00:29:36.860 Removing: /var/run/dpdk/spdk_pid3875570 00:29:36.860 Removing: /var/run/dpdk/spdk_pid3876972 00:29:36.860 Removing: /var/run/dpdk/spdk_pid3877852 00:29:36.860 Removing: /var/run/dpdk/spdk_pid3883263 00:29:36.860 Removing: /var/run/dpdk/spdk_pid3883552 00:29:36.860 Removing: /var/run/dpdk/spdk_pid3883944 00:29:36.860 Removing: /var/run/dpdk/spdk_pid3885504 00:29:36.860 Removing: /var/run/dpdk/spdk_pid3885897 00:29:36.860 Removing: /var/run/dpdk/spdk_pid3886180 00:29:36.860 Removing: /var/run/dpdk/spdk_pid3888655 00:29:36.860 Removing: /var/run/dpdk/spdk_pid3888667 00:29:36.860 Removing: /var/run/dpdk/spdk_pid3890124 00:29:36.860 Removing: /var/run/dpdk/spdk_pid3890491 00:29:36.860 Removing: /var/run/dpdk/spdk_pid3890616 00:29:36.860 Clean 00:29:36.860 14:08:31 -- common/autotest_common.sh@1451 -- # return 0 00:29:36.860 14:08:31 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:29:36.860 14:08:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:36.860 14:08:31 -- common/autotest_common.sh@10 -- # set +x 00:29:36.860 14:08:31 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:29:36.860 14:08:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:36.860 14:08:31 -- common/autotest_common.sh@10 -- # set +x 00:29:36.860 14:08:31 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:29:36.860 14:08:31 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:29:36.860 14:08:31 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:29:36.860 14:08:31 -- spdk/autotest.sh@391 -- # hash lcov 00:29:36.860 14:08:31 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:29:36.860 14:08:31 -- spdk/autotest.sh@393 -- # hostname 00:29:36.860 14:08:31 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:29:37.119 geninfo: WARNING: invalid characters removed from testname! 
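The coverage post-processing that follows (spdk/autotest.sh@394 through @399 below) first merges the baseline and post-test captures and then prunes trees that should not count toward SPDK coverage. Condensed into a standalone sketch, assuming lcov is installed and the two .info files from the capture step exist in the current directory; the exclude patterns are the ones applied in the log:

# Merge the baseline and post-test captures into one tracefile
lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q \
    -a cov_base.info -a cov_test.info -o cov_total.info

# Strip DPDK, system headers and helper apps from the combined tracefile
for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 -q \
        -r cov_total.info "$pattern" -o cov_total.info
done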
00:30:09.189 14:08:59 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:09.189 14:09:03 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:11.724 14:09:06 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:15.009 14:09:09 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:17.548 14:09:12 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:20.840 14:09:15 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:23.377 14:09:18 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:30:23.377 14:09:18 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:23.377 14:09:18 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:30:23.377 14:09:18 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:23.377 14:09:18 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:23.377 14:09:18 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.377 14:09:18 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.377 14:09:18 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.377 14:09:18 -- paths/export.sh@5 -- $ export PATH 00:30:23.377 14:09:18 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.377 14:09:18 -- common/autobuild_common.sh@472 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:30:23.377 14:09:18 -- common/autobuild_common.sh@473 -- $ date +%s 00:30:23.377 14:09:18 -- common/autobuild_common.sh@473 -- $ mktemp -dt spdk_1721045358.XXXXXX 00:30:23.378 14:09:18 -- common/autobuild_common.sh@473 -- $ SPDK_WORKSPACE=/tmp/spdk_1721045358.vyuJ7A 00:30:23.378 14:09:18 -- common/autobuild_common.sh@475 -- $ [[ -n '' ]] 00:30:23.378 14:09:18 -- common/autobuild_common.sh@479 -- $ '[' -n '' ']' 00:30:23.378 14:09:18 -- common/autobuild_common.sh@482 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:30:23.378 14:09:18 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:30:23.378 14:09:18 -- common/autobuild_common.sh@488 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:30:23.378 14:09:18 -- common/autobuild_common.sh@489 -- $ get_config_params 00:30:23.378 14:09:18 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:30:23.378 14:09:18 -- common/autotest_common.sh@10 -- $ set +x 00:30:23.378 14:09:18 -- common/autobuild_common.sh@489 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:30:23.378 14:09:18 -- common/autobuild_common.sh@491 -- $ start_monitor_resources 00:30:23.378 14:09:18 -- pm/common@17 -- $ local monitor 00:30:23.378 14:09:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:23.378 14:09:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:23.378 14:09:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:23.378 14:09:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:23.378 14:09:18 -- pm/common@21 -- $ date +%s 00:30:23.378 14:09:18 -- pm/common@25 -- $ sleep 1 00:30:23.378 
14:09:18 -- pm/common@21 -- $ date +%s 00:30:23.378 14:09:18 -- pm/common@21 -- $ date +%s 00:30:23.378 14:09:18 -- pm/common@21 -- $ date +%s 00:30:23.378 14:09:18 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.release_build.sh.1721045358 00:30:23.378 14:09:18 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.release_build.sh.1721045358 00:30:23.378 14:09:18 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.release_build.sh.1721045358 00:30:23.378 14:09:18 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.release_build.sh.1721045358 00:30:23.378 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.release_build.sh.1721045358_collect-vmstat.pm.log 00:30:23.378 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.release_build.sh.1721045358_collect-cpu-load.pm.log 00:30:23.637 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.release_build.sh.1721045358_collect-cpu-temp.pm.log 00:30:23.637 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.release_build.sh.1721045358_collect-bmc-pm.bmc.pm.log 00:30:24.575 14:09:19 -- common/autobuild_common.sh@492 -- $ trap stop_monitor_resources EXIT 00:30:24.575 14:09:19 -- spdk/release_build.sh@10 -- $ [[ 0 -eq 1 ]] 00:30:24.575 14:09:19 -- spdk/release_build.sh@1 -- $ stop_monitor_resources 00:30:24.575 14:09:19 -- pm/common@29 -- $ signal_monitor_resources TERM 00:30:24.575 14:09:19 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:30:24.575 14:09:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:24.575 14:09:19 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:30:24.575 14:09:19 -- pm/common@44 -- $ pid=3900870 00:30:24.575 14:09:19 -- pm/common@50 -- $ kill -TERM 3900870 00:30:24.575 14:09:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:24.575 14:09:19 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:30:24.575 14:09:19 -- pm/common@44 -- $ pid=3900872 00:30:24.575 14:09:19 -- pm/common@50 -- $ kill -TERM 3900872 00:30:24.575 14:09:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:24.575 14:09:19 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:30:24.575 14:09:19 -- pm/common@44 -- $ pid=3900874 00:30:24.575 14:09:19 -- pm/common@50 -- $ kill -TERM 3900874 00:30:24.575 14:09:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:24.575 14:09:19 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:30:24.575 14:09:19 -- pm/common@44 -- $ pid=3900902 00:30:24.575 14:09:19 -- pm/common@50 -- $ sudo -E kill -TERM 3900902 00:30:24.575 + [[ -n 3545221 ]] 00:30:24.575 + sudo kill 3545221 00:30:24.586 [Pipeline] 
} 00:30:24.606 [Pipeline] // stage 00:30:24.611 [Pipeline] } 00:30:24.625 [Pipeline] // timeout 00:30:24.629 [Pipeline] } 00:30:24.642 [Pipeline] // catchError 00:30:24.647 [Pipeline] } 00:30:24.661 [Pipeline] // wrap 00:30:24.666 [Pipeline] } 00:30:24.676 [Pipeline] // catchError 00:30:24.682 [Pipeline] stage 00:30:24.684 [Pipeline] { (Epilogue) 00:30:24.695 [Pipeline] catchError 00:30:24.697 [Pipeline] { 00:30:24.710 [Pipeline] echo 00:30:24.711 Cleanup processes 00:30:24.716 [Pipeline] sh 00:30:25.032 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:25.032 3901024 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:30:25.032 3901123 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:25.070 [Pipeline] sh 00:30:25.350 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:25.350 ++ grep -v 'sudo pgrep' 00:30:25.350 ++ awk '{print $1}' 00:30:25.350 + sudo kill -9 3901024 00:30:25.363 [Pipeline] sh 00:30:25.646 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:30:33.781 [Pipeline] sh 00:30:34.067 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:30:34.067 Artifacts sizes are good 00:30:34.083 [Pipeline] archiveArtifacts 00:30:34.090 Archiving artifacts 00:30:34.288 [Pipeline] sh 00:30:34.572 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:30:34.587 [Pipeline] cleanWs 00:30:34.602 [WS-CLEANUP] Deleting project workspace... 00:30:34.602 [WS-CLEANUP] Deferred wipeout is used... 00:30:34.621 [WS-CLEANUP] done 00:30:34.623 [Pipeline] } 00:30:34.644 [Pipeline] // catchError 00:30:34.659 [Pipeline] sh 00:30:34.943 + logger -p user.info -t JENKINS-CI 00:30:34.952 [Pipeline] } 00:30:34.968 [Pipeline] // stage 00:30:34.975 [Pipeline] } 00:30:34.993 [Pipeline] // node 00:30:34.999 [Pipeline] End of Pipeline 00:30:35.030 Finished: SUCCESS
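As a closing note, the leftover-process sweep the epilogue runs above (pgrep on the workspace spdk path, filtering out the pgrep itself, then kill -9) reduces to roughly the sketch below. WORKSPACE is a stand-in for the hard-coded /var/jenkins/workspace/nvmf-tcp-phy-autotest path in the log, and the trailing guard is an assumption to keep the step from failing when nothing is left to kill:

pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
# Empty pid list would make kill error out, so tolerate it
sudo kill -9 $pids || true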